00:00:00.001 Started by upstream project "autotest-per-patch" build number 120656 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.083 Fetching changes from the remote Git repository 00:00:00.085 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.124 Using shallow fetch with depth 1 00:00:00.124 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.124 > git --version # timeout=10 00:00:00.161 > git --version # 'git version 2.39.2' 00:00:00.161 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.162 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.162 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.697 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.708 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.719 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD) 00:00:03.719 > git config core.sparsecheckout # timeout=10 00:00:03.728 > git read-tree -mu HEAD # timeout=10 00:00:03.744 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5 00:00:03.764 Commit message: "packer: Insert post-processors only if at least one is defined" 00:00:03.764 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10 00:00:03.847 [Pipeline] Start of Pipeline 00:00:03.864 [Pipeline] library 00:00:03.866 Loading library shm_lib@master 00:00:03.866 Library shm_lib@master is cached. Copying from home. 00:00:03.885 [Pipeline] node 00:00:03.898 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.900 [Pipeline] { 00:00:03.911 [Pipeline] catchError 00:00:03.913 [Pipeline] { 00:00:03.927 [Pipeline] wrap 00:00:03.937 [Pipeline] { 00:00:03.943 [Pipeline] stage 00:00:03.944 [Pipeline] { (Prologue) 00:00:04.096 [Pipeline] sh 00:00:04.374 + logger -p user.info -t JENKINS-CI 00:00:04.390 [Pipeline] echo 00:00:04.391 Node: WFP16 00:00:04.399 [Pipeline] sh 00:00:04.693 [Pipeline] setCustomBuildProperty 00:00:04.706 [Pipeline] echo 00:00:04.707 Cleanup processes 00:00:04.713 [Pipeline] sh 00:00:04.999 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.258 3518025 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.272 [Pipeline] sh 00:00:05.554 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.554 ++ grep -v 'sudo pgrep' 00:00:05.554 ++ awk '{print $1}' 00:00:05.554 + sudo kill -9 00:00:05.554 + true 00:00:05.569 [Pipeline] cleanWs 00:00:05.578 [WS-CLEANUP] Deleting project workspace... 00:00:05.578 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.584 [WS-CLEANUP] done 00:00:05.588 [Pipeline] setCustomBuildProperty 00:00:05.602 [Pipeline] sh 00:00:05.889 + sudo git config --global --replace-all safe.directory '*' 00:00:05.966 [Pipeline] nodesByLabel 00:00:05.968 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.978 [Pipeline] httpRequest 00:00:05.983 HttpMethod: GET 00:00:05.983 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:05.987 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:05.990 Response Code: HTTP/1.1 200 OK 00:00:05.990 Success: Status code 200 is in the accepted range: 200,404 00:00:05.991 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:06.517 [Pipeline] sh 00:00:06.792 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:06.811 [Pipeline] httpRequest 00:00:06.814 HttpMethod: GET 00:00:06.815 URL: http://10.211.164.101/packages/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz 00:00:06.815 Sending request to url: http://10.211.164.101/packages/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz 00:00:06.818 Response Code: HTTP/1.1 200 OK 00:00:06.818 Success: Status code 200 is in the accepted range: 200,404 00:00:06.819 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz 00:00:20.423 [Pipeline] sh 00:00:20.707 + tar --no-same-owner -xf spdk_77a84e60e073c769797deff624cc274e83a9e621.tar.gz 00:00:24.914 [Pipeline] sh 00:00:25.198 + git -C spdk log --oneline -n5 00:00:25.198 77a84e60e nvmf/tcp: add nvmf_qpair_set_ctrlr helper function 00:00:25.198 2731ac8c5 app/trace: emit owner descriptions 00:00:25.198 c064dc584 trace: rename trace_event's poller_id to owner_id 00:00:25.198 23f700383 trace: add concept of "owner" to trace files 00:00:25.198 67f328f92 trace: rename "per_lcore_history" to just "data" 00:00:25.213 [Pipeline] } 00:00:25.231 [Pipeline] // stage 00:00:25.241 [Pipeline] stage 00:00:25.245 [Pipeline] { (Prepare) 00:00:25.266 [Pipeline] writeFile 00:00:25.283 [Pipeline] sh 00:00:25.565 + logger -p user.info -t JENKINS-CI 00:00:25.579 [Pipeline] sh 00:00:25.864 + logger -p user.info -t JENKINS-CI 00:00:25.876 [Pipeline] sh 00:00:26.158 + cat autorun-spdk.conf 00:00:26.158 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.158 SPDK_TEST_NVMF=1 00:00:26.158 SPDK_TEST_NVME_CLI=1 00:00:26.158 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:26.158 SPDK_TEST_NVMF_NICS=e810 00:00:26.158 SPDK_TEST_VFIOUSER=1 00:00:26.158 SPDK_RUN_UBSAN=1 00:00:26.158 NET_TYPE=phy 00:00:26.166 RUN_NIGHTLY=0 00:00:26.170 [Pipeline] readFile 00:00:26.194 [Pipeline] withEnv 00:00:26.196 [Pipeline] { 00:00:26.210 [Pipeline] sh 00:00:26.494 + set -ex 00:00:26.494 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:26.494 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:26.494 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.494 ++ SPDK_TEST_NVMF=1 00:00:26.494 ++ SPDK_TEST_NVME_CLI=1 00:00:26.494 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:26.494 ++ SPDK_TEST_NVMF_NICS=e810 00:00:26.494 ++ SPDK_TEST_VFIOUSER=1 00:00:26.494 ++ SPDK_RUN_UBSAN=1 00:00:26.494 ++ NET_TYPE=phy 00:00:26.494 ++ RUN_NIGHTLY=0 00:00:26.494 + case $SPDK_TEST_NVMF_NICS in 00:00:26.494 + DRIVERS=ice 00:00:26.494 + [[ tcp == \r\d\m\a ]] 00:00:26.494 + [[ -n ice ]] 00:00:26.494 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:26.753 rmmod: ERROR: 
Module mlx4_ib is not currently loaded 00:00:33.324 rmmod: ERROR: Module irdma is not currently loaded 00:00:33.324 rmmod: ERROR: Module i40iw is not currently loaded 00:00:33.324 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:33.324 + true 00:00:33.324 + for D in $DRIVERS 00:00:33.324 + sudo modprobe ice 00:00:33.324 + exit 0 00:00:33.333 [Pipeline] } 00:00:33.349 [Pipeline] // withEnv 00:00:33.353 [Pipeline] } 00:00:33.366 [Pipeline] // stage 00:00:33.373 [Pipeline] catchError 00:00:33.375 [Pipeline] { 00:00:33.389 [Pipeline] timeout 00:00:33.389 Timeout set to expire in 40 min 00:00:33.390 [Pipeline] { 00:00:33.403 [Pipeline] stage 00:00:33.404 [Pipeline] { (Tests) 00:00:33.418 [Pipeline] sh 00:00:33.695 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:33.695 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:33.695 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:33.695 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:33.695 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.695 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:33.695 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:33.695 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:33.695 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:33.695 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:33.695 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:33.695 + source /etc/os-release 00:00:33.695 ++ NAME='Fedora Linux' 00:00:33.695 ++ VERSION='38 (Cloud Edition)' 00:00:33.695 ++ ID=fedora 00:00:33.695 ++ VERSION_ID=38 00:00:33.695 ++ VERSION_CODENAME= 00:00:33.695 ++ PLATFORM_ID=platform:f38 00:00:33.695 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:33.695 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:33.695 ++ LOGO=fedora-logo-icon 00:00:33.695 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:33.695 ++ HOME_URL=https://fedoraproject.org/ 00:00:33.695 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:33.695 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:33.695 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:33.695 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:33.695 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:33.695 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:33.695 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:33.695 ++ SUPPORT_END=2024-05-14 00:00:33.695 ++ VARIANT='Cloud Edition' 00:00:33.695 ++ VARIANT_ID=cloud 00:00:33.695 + uname -a 00:00:33.695 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:33.695 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:36.230 Hugepages 00:00:36.230 node hugesize free / total 00:00:36.230 node0 1048576kB 0 / 0 00:00:36.230 node0 2048kB 0 / 0 00:00:36.230 node1 1048576kB 0 / 0 00:00:36.230 node1 2048kB 0 / 0 00:00:36.230 00:00:36.230 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:36.230 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:36.230 I/OAT 0000:00:04.7 8086 2021 0 
ioatdma - - 00:00:36.230 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:36.230 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:36.230 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:36.230 + rm -f /tmp/spdk-ld-path 00:00:36.230 + source autorun-spdk.conf 00:00:36.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.230 ++ SPDK_TEST_NVMF=1 00:00:36.230 ++ SPDK_TEST_NVME_CLI=1 00:00:36.230 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.230 ++ SPDK_TEST_NVMF_NICS=e810 00:00:36.230 ++ SPDK_TEST_VFIOUSER=1 00:00:36.230 ++ SPDK_RUN_UBSAN=1 00:00:36.230 ++ NET_TYPE=phy 00:00:36.230 ++ RUN_NIGHTLY=0 00:00:36.230 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:36.230 + [[ -n '' ]] 00:00:36.230 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.230 + for M in /var/spdk/build-*-manifest.txt 00:00:36.230 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:36.230 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.230 + for M in /var/spdk/build-*-manifest.txt 00:00:36.230 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:36.230 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.488 ++ uname 00:00:36.488 + [[ Linux == \L\i\n\u\x ]] 00:00:36.488 + sudo dmesg -T 00:00:36.488 + sudo dmesg --clear 00:00:36.488 + dmesg_pid=3518987 00:00:36.488 + [[ Fedora Linux == FreeBSD ]] 00:00:36.488 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.488 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.488 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:36.488 + [[ -x /usr/src/fio-static/fio ]] 00:00:36.488 + export FIO_BIN=/usr/src/fio-static/fio 00:00:36.488 + FIO_BIN=/usr/src/fio-static/fio 00:00:36.488 + sudo dmesg -Tw 00:00:36.488 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:36.488 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:36.488 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:36.488 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.488 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.488 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:36.488 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.488 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.488 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.488 Test configuration: 00:00:36.488 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.488 SPDK_TEST_NVMF=1 00:00:36.488 SPDK_TEST_NVME_CLI=1 00:00:36.488 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.488 SPDK_TEST_NVMF_NICS=e810 00:00:36.488 SPDK_TEST_VFIOUSER=1 00:00:36.488 SPDK_RUN_UBSAN=1 00:00:36.489 NET_TYPE=phy 00:00:36.489 RUN_NIGHTLY=0 03:51:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:36.489 03:51:50 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:36.489 03:51:50 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:36.489 03:51:50 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:36.489 03:51:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.489 03:51:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.489 03:51:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.489 03:51:50 -- paths/export.sh@5 -- $ export PATH 00:00:36.489 03:51:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:36.489 03:51:50 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:36.489 03:51:50 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:36.489 03:51:50 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713491510.XXXXXX 00:00:36.489 03:51:50 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713491510.SkyN14 00:00:36.489 03:51:50 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:36.489 03:51:50 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:36.489 03:51:50 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:36.489 03:51:50 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:36.489 03:51:50 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:36.489 03:51:50 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:36.489 03:51:50 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:36.489 03:51:50 -- common/autotest_common.sh@10 -- $ set +x 00:00:36.489 03:51:50 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:36.489 03:51:50 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:36.489 03:51:50 -- pm/common@17 -- $ local monitor 00:00:36.489 03:51:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.489 03:51:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3519024 00:00:36.489 03:51:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.489 03:51:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3519026 00:00:36.489 03:51:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.489 03:51:50 -- pm/common@21 -- $ date +%s 00:00:36.489 03:51:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3519028 00:00:36.489 03:51:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:36.489 03:51:50 -- pm/common@21 -- $ date +%s 00:00:36.489 03:51:50 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3519031 00:00:36.489 03:51:50 -- pm/common@26 -- $ sleep 1 00:00:36.489 03:51:50 -- pm/common@21 -- $ date +%s 00:00:36.489 03:51:50 -- pm/common@21 -- $ date +%s 00:00:36.489 03:51:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491510 00:00:36.489 03:51:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491510 00:00:36.489 03:51:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491510 00:00:36.489 03:51:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713491510 00:00:36.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491510_collect-vmstat.pm.log 00:00:36.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491510_collect-cpu-load.pm.log 00:00:36.747 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491510_collect-bmc-pm.bmc.pm.log 00:00:36.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713491510_collect-cpu-temp.pm.log 00:00:37.681 03:51:51 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:37.681 03:51:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:37.681 03:51:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:37.681 03:51:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:37.681 03:51:51 -- spdk/autobuild.sh@16 -- $ date -u 00:00:37.681 Fri Apr 19 01:51:51 AM UTC 2024 00:00:37.681 03:51:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:37.681 v24.05-pre-415-g77a84e60e 00:00:37.681 03:51:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:37.681 03:51:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:37.681 03:51:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:37.681 03:51:52 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:37.681 03:51:52 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:37.681 03:51:52 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.940 ************************************ 00:00:37.940 START TEST ubsan 00:00:37.940 ************************************ 00:00:37.940 03:51:52 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:37.940 using ubsan 00:00:37.940 00:00:37.940 real 0m0.000s 00:00:37.940 user 0m0.000s 00:00:37.940 sys 0m0.000s 00:00:37.940 03:51:52 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:37.940 03:51:52 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.940 ************************************ 00:00:37.940 END TEST ubsan 00:00:37.940 ************************************ 00:00:37.940 03:51:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:37.940 03:51:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:37.940 03:51:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:37.940 03:51:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:37.940 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:37.940 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:38.507 Using 'verbs' RDMA provider 00:00:51.321 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:06.200 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:06.200 Creating mk/config.mk...done. 00:01:06.200 Creating mk/cc.flags.mk...done. 00:01:06.200 Type 'make' to build. 
00:01:06.200 03:52:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:06.200 03:52:18 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:06.200 03:52:18 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:06.200 03:52:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.200 ************************************ 00:01:06.200 START TEST make 00:01:06.200 ************************************ 00:01:06.200 03:52:18 -- common/autotest_common.sh@1111 -- $ make -j112 00:01:06.200 make[1]: Nothing to be done for 'all'. 00:01:06.200 The Meson build system 00:01:06.200 Version: 1.3.1 00:01:06.200 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:06.200 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:06.200 Build type: native build 00:01:06.200 Project name: libvfio-user 00:01:06.200 Project version: 0.0.1 00:01:06.200 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.200 C linker for the host machine: cc ld.bfd 2.39-16 00:01:06.200 Host machine cpu family: x86_64 00:01:06.200 Host machine cpu: x86_64 00:01:06.200 Run-time dependency threads found: YES 00:01:06.200 Library dl found: YES 00:01:06.200 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.200 Run-time dependency json-c found: YES 0.17 00:01:06.200 Run-time dependency cmocka found: YES 1.1.7 00:01:06.200 Program pytest-3 found: NO 00:01:06.200 Program flake8 found: NO 00:01:06.200 Program misspell-fixer found: NO 00:01:06.200 Program restructuredtext-lint found: NO 00:01:06.200 Program valgrind found: YES (/usr/bin/valgrind) 00:01:06.200 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:06.200 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.200 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.200 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:06.200 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:06.200 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:06.200 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:06.200 Build targets in project: 8 00:01:06.200 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:06.200 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:06.200 00:01:06.200 libvfio-user 0.0.1 00:01:06.200 00:01:06.200 User defined options 00:01:06.200 buildtype : debug 00:01:06.200 default_library: shared 00:01:06.200 libdir : /usr/local/lib 00:01:06.200 00:01:06.201 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.767 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:06.767 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:06.767 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:06.767 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:06.767 [4/37] Compiling C object samples/null.p/null.c.o 00:01:06.767 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:06.767 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:06.767 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:06.767 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:06.767 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:06.767 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:06.767 [11/37] Compiling C object samples/server.p/server.c.o 00:01:06.767 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:06.767 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:06.767 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:06.767 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:06.767 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:06.767 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:06.767 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:06.767 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:06.767 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:06.767 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:06.767 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:06.767 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:06.767 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:06.767 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:06.767 [26/37] Compiling C object samples/client.p/client.c.o 00:01:07.024 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:07.024 [28/37] Linking target samples/client 00:01:07.024 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:07.024 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:07.024 [31/37] Linking target test/unit_tests 00:01:07.024 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:07.281 [33/37] Linking target samples/server 00:01:07.281 [34/37] Linking target samples/null 00:01:07.281 [35/37] Linking target samples/gpio-pci-idio-16 00:01:07.281 [36/37] Linking target samples/lspci 00:01:07.281 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:07.281 INFO: autodetecting backend as ninja 00:01:07.281 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:07.282 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:07.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:07.538 ninja: no work to do. 00:01:12.817 The Meson build system 00:01:12.817 Version: 1.3.1 00:01:12.817 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:12.817 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:12.817 Build type: native build 00:01:12.817 Program cat found: YES (/usr/bin/cat) 00:01:12.817 Project name: DPDK 00:01:12.817 Project version: 23.11.0 00:01:12.817 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.817 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.817 Host machine cpu family: x86_64 00:01:12.817 Host machine cpu: x86_64 00:01:12.817 Message: ## Building in Developer Mode ## 00:01:12.817 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:12.817 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:12.817 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:12.817 Program python3 found: YES (/usr/bin/python3) 00:01:12.817 Program cat found: YES (/usr/bin/cat) 00:01:12.817 Compiler for C supports arguments -march=native: YES 00:01:12.817 Checking for size of "void *" : 8 00:01:12.817 Checking for size of "void *" : 8 (cached) 00:01:12.817 Library m found: YES 00:01:12.817 Library numa found: YES 00:01:12.817 Has header "numaif.h" : YES 00:01:12.817 Library fdt found: NO 00:01:12.817 Library execinfo found: NO 00:01:12.817 Has header "execinfo.h" : YES 00:01:12.817 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.817 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:12.817 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:12.817 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:12.817 Run-time dependency openssl found: YES 3.0.9 00:01:12.817 Run-time dependency libpcap found: YES 1.10.4 00:01:12.817 Has header "pcap.h" with dependency libpcap: YES 00:01:12.817 Compiler for C supports arguments -Wcast-qual: YES 00:01:12.817 Compiler for C supports arguments -Wdeprecated: YES 00:01:12.817 Compiler for C supports arguments -Wformat: YES 00:01:12.817 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:12.817 Compiler for C supports arguments -Wformat-security: NO 00:01:12.817 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.817 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:12.817 Compiler for C supports arguments -Wnested-externs: YES 00:01:12.817 Compiler for C supports arguments -Wold-style-definition: YES 00:01:12.817 Compiler for C supports arguments -Wpointer-arith: YES 00:01:12.817 Compiler for C supports arguments -Wsign-compare: YES 00:01:12.817 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:12.817 Compiler for C supports arguments -Wundef: YES 00:01:12.817 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.817 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:12.817 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:12.817 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:12.817 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:12.817 Program objdump found: YES (/usr/bin/objdump) 00:01:12.817 Compiler for C supports arguments -mavx512f: YES 00:01:12.817 Checking if "AVX512 checking" compiles: YES 00:01:12.817 Fetching value of define "__SSE4_2__" : 1 00:01:12.817 Fetching value of define "__AES__" : 1 00:01:12.817 Fetching value of define "__AVX__" : 1 00:01:12.817 Fetching value of define "__AVX2__" : 1 00:01:12.817 Fetching value of define "__AVX512BW__" : 1 00:01:12.817 Fetching value of define "__AVX512CD__" : 1 00:01:12.817 Fetching value of define "__AVX512DQ__" : 1 00:01:12.817 Fetching value of define "__AVX512F__" : 1 00:01:12.817 Fetching value of define "__AVX512VL__" : 1 00:01:12.817 Fetching value of define "__PCLMUL__" : 1 00:01:12.817 Fetching value of define "__RDRND__" : 1 00:01:12.817 Fetching value of define "__RDSEED__" : 1 00:01:12.817 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:12.817 Fetching value of define "__znver1__" : (undefined) 00:01:12.817 Fetching value of define "__znver2__" : (undefined) 00:01:12.817 Fetching value of define "__znver3__" : (undefined) 00:01:12.817 Fetching value of define "__znver4__" : (undefined) 00:01:12.817 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:12.817 Message: lib/log: Defining dependency "log" 00:01:12.817 Message: lib/kvargs: Defining dependency "kvargs" 00:01:12.817 Message: lib/telemetry: Defining dependency "telemetry" 00:01:12.817 Checking for function "getentropy" : NO 00:01:12.817 Message: lib/eal: Defining dependency "eal" 00:01:12.817 Message: lib/ring: Defining dependency "ring" 00:01:12.817 Message: lib/rcu: Defining dependency "rcu" 00:01:12.817 Message: lib/mempool: Defining dependency "mempool" 00:01:12.817 Message: lib/mbuf: Defining dependency "mbuf" 00:01:12.817 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:12.817 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:12.817 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:12.817 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:12.817 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:12.817 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:12.817 Compiler for C supports arguments -mpclmul: YES 00:01:12.817 Compiler for C supports arguments -maes: YES 00:01:12.817 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:12.817 Compiler for C supports arguments -mavx512bw: YES 00:01:12.817 Compiler for C supports arguments -mavx512dq: YES 00:01:12.817 Compiler for C supports arguments -mavx512vl: YES 00:01:12.817 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:12.817 Compiler for C supports arguments -mavx2: YES 00:01:12.817 Compiler for C supports arguments -mavx: YES 00:01:12.817 Message: lib/net: Defining dependency "net" 00:01:12.817 Message: lib/meter: Defining dependency "meter" 00:01:12.817 Message: lib/ethdev: Defining dependency "ethdev" 00:01:12.817 Message: lib/pci: Defining dependency "pci" 00:01:12.817 Message: lib/cmdline: Defining dependency "cmdline" 00:01:12.817 Message: lib/hash: Defining dependency "hash" 00:01:12.817 Message: lib/timer: Defining dependency "timer" 00:01:12.817 Message: lib/compressdev: Defining dependency "compressdev" 00:01:12.817 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:12.817 Message: lib/dmadev: Defining dependency "dmadev" 00:01:12.817 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:12.817 Message: lib/power: Defining dependency "power" 00:01:12.817 Message: lib/reorder: Defining dependency "reorder" 00:01:12.817 Message: lib/security: Defining dependency "security" 00:01:12.817 Has header "linux/userfaultfd.h" : YES 00:01:12.817 Has header "linux/vduse.h" : YES 00:01:12.817 Message: lib/vhost: Defining dependency "vhost" 00:01:12.817 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:12.817 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:12.817 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:12.817 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:12.817 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:12.817 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:12.817 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:12.817 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:12.817 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:12.817 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:12.817 Program doxygen found: YES (/usr/bin/doxygen) 00:01:12.817 Configuring doxy-api-html.conf using configuration 00:01:12.817 Configuring doxy-api-man.conf using configuration 00:01:12.817 Program mandb found: YES (/usr/bin/mandb) 00:01:12.817 Program sphinx-build found: NO 00:01:12.817 Configuring rte_build_config.h using configuration 00:01:12.817 Message: 00:01:12.817 ================= 00:01:12.817 Applications Enabled 00:01:12.817 ================= 00:01:12.817 00:01:12.817 apps: 00:01:12.817 00:01:12.817 00:01:12.817 Message: 00:01:12.817 ================= 00:01:12.817 Libraries Enabled 00:01:12.817 ================= 00:01:12.817 00:01:12.817 libs: 00:01:12.817 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:12.817 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:12.817 cryptodev, dmadev, power, reorder, security, vhost, 00:01:12.817 00:01:12.817 Message: 00:01:12.817 =============== 00:01:12.817 Drivers Enabled 00:01:12.817 =============== 00:01:12.817 00:01:12.817 common: 00:01:12.817 00:01:12.817 bus: 00:01:12.817 pci, vdev, 00:01:12.817 mempool: 00:01:12.817 ring, 00:01:12.817 dma: 00:01:12.817 00:01:12.817 net: 00:01:12.817 00:01:12.817 crypto: 00:01:12.817 00:01:12.817 compress: 00:01:12.817 00:01:12.817 vdpa: 00:01:12.817 00:01:12.817 00:01:12.817 Message: 00:01:12.817 ================= 00:01:12.817 Content Skipped 00:01:12.817 ================= 00:01:12.817 00:01:12.817 apps: 00:01:12.817 dumpcap: explicitly disabled via build config 00:01:12.817 graph: explicitly disabled via build config 00:01:12.817 pdump: explicitly disabled via build config 00:01:12.817 proc-info: explicitly disabled via build config 00:01:12.817 test-acl: explicitly disabled via build config 00:01:12.818 test-bbdev: explicitly disabled via build config 00:01:12.818 test-cmdline: explicitly disabled via build config 00:01:12.818 test-compress-perf: explicitly disabled via build config 00:01:12.818 test-crypto-perf: explicitly disabled via build config 00:01:12.818 test-dma-perf: explicitly disabled via build config 00:01:12.818 test-eventdev: explicitly disabled via build config 00:01:12.818 test-fib: explicitly disabled via build config 00:01:12.818 test-flow-perf: explicitly disabled via build config 00:01:12.818 test-gpudev: explicitly disabled via build config 00:01:12.818 test-mldev: explicitly disabled via build 
config 00:01:12.818 test-pipeline: explicitly disabled via build config 00:01:12.818 test-pmd: explicitly disabled via build config 00:01:12.818 test-regex: explicitly disabled via build config 00:01:12.818 test-sad: explicitly disabled via build config 00:01:12.818 test-security-perf: explicitly disabled via build config 00:01:12.818 00:01:12.818 libs: 00:01:12.818 metrics: explicitly disabled via build config 00:01:12.818 acl: explicitly disabled via build config 00:01:12.818 bbdev: explicitly disabled via build config 00:01:12.818 bitratestats: explicitly disabled via build config 00:01:12.818 bpf: explicitly disabled via build config 00:01:12.818 cfgfile: explicitly disabled via build config 00:01:12.818 distributor: explicitly disabled via build config 00:01:12.818 efd: explicitly disabled via build config 00:01:12.818 eventdev: explicitly disabled via build config 00:01:12.818 dispatcher: explicitly disabled via build config 00:01:12.818 gpudev: explicitly disabled via build config 00:01:12.818 gro: explicitly disabled via build config 00:01:12.818 gso: explicitly disabled via build config 00:01:12.818 ip_frag: explicitly disabled via build config 00:01:12.818 jobstats: explicitly disabled via build config 00:01:12.818 latencystats: explicitly disabled via build config 00:01:12.818 lpm: explicitly disabled via build config 00:01:12.818 member: explicitly disabled via build config 00:01:12.818 pcapng: explicitly disabled via build config 00:01:12.818 rawdev: explicitly disabled via build config 00:01:12.818 regexdev: explicitly disabled via build config 00:01:12.818 mldev: explicitly disabled via build config 00:01:12.818 rib: explicitly disabled via build config 00:01:12.818 sched: explicitly disabled via build config 00:01:12.818 stack: explicitly disabled via build config 00:01:12.818 ipsec: explicitly disabled via build config 00:01:12.818 pdcp: explicitly disabled via build config 00:01:12.818 fib: explicitly disabled via build config 00:01:12.818 port: explicitly disabled via build config 00:01:12.818 pdump: explicitly disabled via build config 00:01:12.818 table: explicitly disabled via build config 00:01:12.818 pipeline: explicitly disabled via build config 00:01:12.818 graph: explicitly disabled via build config 00:01:12.818 node: explicitly disabled via build config 00:01:12.818 00:01:12.818 drivers: 00:01:12.818 common/cpt: not in enabled drivers build config 00:01:12.818 common/dpaax: not in enabled drivers build config 00:01:12.818 common/iavf: not in enabled drivers build config 00:01:12.818 common/idpf: not in enabled drivers build config 00:01:12.818 common/mvep: not in enabled drivers build config 00:01:12.818 common/octeontx: not in enabled drivers build config 00:01:12.818 bus/auxiliary: not in enabled drivers build config 00:01:12.818 bus/cdx: not in enabled drivers build config 00:01:12.818 bus/dpaa: not in enabled drivers build config 00:01:12.818 bus/fslmc: not in enabled drivers build config 00:01:12.818 bus/ifpga: not in enabled drivers build config 00:01:12.818 bus/platform: not in enabled drivers build config 00:01:12.818 bus/vmbus: not in enabled drivers build config 00:01:12.818 common/cnxk: not in enabled drivers build config 00:01:12.818 common/mlx5: not in enabled drivers build config 00:01:12.818 common/nfp: not in enabled drivers build config 00:01:12.818 common/qat: not in enabled drivers build config 00:01:12.818 common/sfc_efx: not in enabled drivers build config 00:01:12.818 mempool/bucket: not in enabled drivers build config 00:01:12.818 
mempool/cnxk: not in enabled drivers build config 00:01:12.818 mempool/dpaa: not in enabled drivers build config 00:01:12.818 mempool/dpaa2: not in enabled drivers build config 00:01:12.818 mempool/octeontx: not in enabled drivers build config 00:01:12.818 mempool/stack: not in enabled drivers build config 00:01:12.818 dma/cnxk: not in enabled drivers build config 00:01:12.818 dma/dpaa: not in enabled drivers build config 00:01:12.818 dma/dpaa2: not in enabled drivers build config 00:01:12.818 dma/hisilicon: not in enabled drivers build config 00:01:12.818 dma/idxd: not in enabled drivers build config 00:01:12.818 dma/ioat: not in enabled drivers build config 00:01:12.818 dma/skeleton: not in enabled drivers build config 00:01:12.818 net/af_packet: not in enabled drivers build config 00:01:12.818 net/af_xdp: not in enabled drivers build config 00:01:12.818 net/ark: not in enabled drivers build config 00:01:12.818 net/atlantic: not in enabled drivers build config 00:01:12.818 net/avp: not in enabled drivers build config 00:01:12.818 net/axgbe: not in enabled drivers build config 00:01:12.818 net/bnx2x: not in enabled drivers build config 00:01:12.818 net/bnxt: not in enabled drivers build config 00:01:12.818 net/bonding: not in enabled drivers build config 00:01:12.818 net/cnxk: not in enabled drivers build config 00:01:12.818 net/cpfl: not in enabled drivers build config 00:01:12.818 net/cxgbe: not in enabled drivers build config 00:01:12.818 net/dpaa: not in enabled drivers build config 00:01:12.818 net/dpaa2: not in enabled drivers build config 00:01:12.818 net/e1000: not in enabled drivers build config 00:01:12.818 net/ena: not in enabled drivers build config 00:01:12.818 net/enetc: not in enabled drivers build config 00:01:12.818 net/enetfec: not in enabled drivers build config 00:01:12.818 net/enic: not in enabled drivers build config 00:01:12.818 net/failsafe: not in enabled drivers build config 00:01:12.818 net/fm10k: not in enabled drivers build config 00:01:12.818 net/gve: not in enabled drivers build config 00:01:12.818 net/hinic: not in enabled drivers build config 00:01:12.818 net/hns3: not in enabled drivers build config 00:01:12.818 net/i40e: not in enabled drivers build config 00:01:12.818 net/iavf: not in enabled drivers build config 00:01:12.818 net/ice: not in enabled drivers build config 00:01:12.818 net/idpf: not in enabled drivers build config 00:01:12.818 net/igc: not in enabled drivers build config 00:01:12.818 net/ionic: not in enabled drivers build config 00:01:12.818 net/ipn3ke: not in enabled drivers build config 00:01:12.818 net/ixgbe: not in enabled drivers build config 00:01:12.818 net/mana: not in enabled drivers build config 00:01:12.818 net/memif: not in enabled drivers build config 00:01:12.818 net/mlx4: not in enabled drivers build config 00:01:12.818 net/mlx5: not in enabled drivers build config 00:01:12.818 net/mvneta: not in enabled drivers build config 00:01:12.818 net/mvpp2: not in enabled drivers build config 00:01:12.818 net/netvsc: not in enabled drivers build config 00:01:12.818 net/nfb: not in enabled drivers build config 00:01:12.818 net/nfp: not in enabled drivers build config 00:01:12.818 net/ngbe: not in enabled drivers build config 00:01:12.818 net/null: not in enabled drivers build config 00:01:12.818 net/octeontx: not in enabled drivers build config 00:01:12.818 net/octeon_ep: not in enabled drivers build config 00:01:12.818 net/pcap: not in enabled drivers build config 00:01:12.818 net/pfe: not in enabled drivers build config 
00:01:12.818 net/qede: not in enabled drivers build config 00:01:12.818 net/ring: not in enabled drivers build config 00:01:12.818 net/sfc: not in enabled drivers build config 00:01:12.818 net/softnic: not in enabled drivers build config 00:01:12.818 net/tap: not in enabled drivers build config 00:01:12.818 net/thunderx: not in enabled drivers build config 00:01:12.818 net/txgbe: not in enabled drivers build config 00:01:12.818 net/vdev_netvsc: not in enabled drivers build config 00:01:12.818 net/vhost: not in enabled drivers build config 00:01:12.818 net/virtio: not in enabled drivers build config 00:01:12.818 net/vmxnet3: not in enabled drivers build config 00:01:12.818 raw/*: missing internal dependency, "rawdev" 00:01:12.818 crypto/armv8: not in enabled drivers build config 00:01:12.818 crypto/bcmfs: not in enabled drivers build config 00:01:12.818 crypto/caam_jr: not in enabled drivers build config 00:01:12.818 crypto/ccp: not in enabled drivers build config 00:01:12.818 crypto/cnxk: not in enabled drivers build config 00:01:12.818 crypto/dpaa_sec: not in enabled drivers build config 00:01:12.818 crypto/dpaa2_sec: not in enabled drivers build config 00:01:12.818 crypto/ipsec_mb: not in enabled drivers build config 00:01:12.818 crypto/mlx5: not in enabled drivers build config 00:01:12.818 crypto/mvsam: not in enabled drivers build config 00:01:12.818 crypto/nitrox: not in enabled drivers build config 00:01:12.818 crypto/null: not in enabled drivers build config 00:01:12.818 crypto/octeontx: not in enabled drivers build config 00:01:12.818 crypto/openssl: not in enabled drivers build config 00:01:12.818 crypto/scheduler: not in enabled drivers build config 00:01:12.818 crypto/uadk: not in enabled drivers build config 00:01:12.818 crypto/virtio: not in enabled drivers build config 00:01:12.818 compress/isal: not in enabled drivers build config 00:01:12.818 compress/mlx5: not in enabled drivers build config 00:01:12.818 compress/octeontx: not in enabled drivers build config 00:01:12.818 compress/zlib: not in enabled drivers build config 00:01:12.818 regex/*: missing internal dependency, "regexdev" 00:01:12.818 ml/*: missing internal dependency, "mldev" 00:01:12.818 vdpa/ifc: not in enabled drivers build config 00:01:12.818 vdpa/mlx5: not in enabled drivers build config 00:01:12.818 vdpa/nfp: not in enabled drivers build config 00:01:12.818 vdpa/sfc: not in enabled drivers build config 00:01:12.818 event/*: missing internal dependency, "eventdev" 00:01:12.818 baseband/*: missing internal dependency, "bbdev" 00:01:12.818 gpu/*: missing internal dependency, "gpudev" 00:01:12.818 00:01:12.818 00:01:13.078 Build targets in project: 85 00:01:13.078 00:01:13.078 DPDK 23.11.0 00:01:13.078 00:01:13.078 User defined options 00:01:13.078 buildtype : debug 00:01:13.078 default_library : shared 00:01:13.078 libdir : lib 00:01:13.078 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.078 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:13.078 c_link_args : 00:01:13.078 cpu_instruction_set: native 00:01:13.078 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:13.078 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:13.078 enable_docs : false 00:01:13.078 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:13.078 enable_kmods : false 00:01:13.078 tests : false 00:01:13.078 00:01:13.078 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:13.656 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:13.656 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:13.656 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:13.656 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:13.656 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:13.656 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:13.656 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:13.917 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:13.917 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:13.917 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:13.917 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:13.917 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:13.917 [12/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:13.917 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:13.917 [14/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:13.917 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:13.917 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:13.917 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:13.917 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:13.917 [19/265] Linking static target lib/librte_kvargs.a 00:01:13.917 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:13.917 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:13.917 [22/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:13.917 [23/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:13.917 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:13.917 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:13.917 [26/265] Linking static target lib/librte_log.a 00:01:13.917 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:13.917 [28/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:13.917 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:13.917 [30/265] Linking static target lib/librte_pci.a 00:01:13.917 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:13.917 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:13.917 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:13.917 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:13.917 [35/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:13.917 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:13.917 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:13.917 [38/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:14.177 [39/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:14.177 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:14.177 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:14.177 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:14.436 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.436 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:14.436 [45/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:14.436 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:14.436 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:14.436 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:14.436 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:14.436 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:14.436 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:14.436 [52/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.436 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:14.436 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:14.436 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:14.436 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:14.436 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:14.436 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:14.436 [59/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:14.436 [60/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:14.436 [61/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:14.436 [62/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:14.436 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:14.436 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:14.436 [65/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:14.436 [66/265] Linking static target lib/librte_meter.a 00:01:14.436 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:14.436 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:14.436 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:14.436 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:14.436 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:14.436 [72/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:14.436 [73/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:14.436 [74/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:14.436 
[75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:14.436 [76/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:14.436 [77/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:14.436 [78/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:14.436 [79/265] Linking static target lib/librte_ring.a 00:01:14.436 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:14.436 [81/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:14.436 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:14.436 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:14.436 [84/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:14.436 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:14.436 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:14.436 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:14.436 [88/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:14.436 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:14.436 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:14.436 [91/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:14.436 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:14.436 [93/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:14.436 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:14.436 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:14.436 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:14.436 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:14.436 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:14.436 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:14.436 [100/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:14.436 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:14.436 [102/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:14.436 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:14.436 [104/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:14.436 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:14.436 [106/265] Linking static target lib/librte_telemetry.a 00:01:14.436 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:14.436 [108/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:14.436 [109/265] Linking static target lib/librte_cmdline.a 00:01:14.436 [110/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:14.436 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:14.436 [112/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:14.436 [113/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:14.436 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:14.436 [115/265] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:14.436 [116/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:14.436 [117/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:14.436 [118/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:14.695 [119/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:14.695 [120/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:14.695 [121/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:14.695 [122/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:14.695 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:14.695 [124/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:14.695 [125/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:14.695 [126/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:14.695 [127/265] Linking static target lib/librte_timer.a 00:01:14.695 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:14.695 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:14.695 [130/265] Linking static target lib/librte_compressdev.a 00:01:14.695 [131/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:14.695 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:14.695 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:14.695 [134/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:14.695 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:14.695 [136/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:14.695 [137/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:14.695 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:14.695 [139/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:14.695 [140/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.695 [141/265] Linking static target lib/librte_net.a 00:01:14.695 [142/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:14.695 [143/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:14.695 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:14.695 [145/265] Linking static target lib/librte_eal.a 00:01:14.695 [146/265] Linking target lib/librte_log.so.24.0 00:01:14.695 [147/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:14.695 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:14.695 [149/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:14.695 [150/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:14.695 [151/265] Linking static target lib/librte_mempool.a 00:01:14.695 [152/265] Linking static target lib/librte_rcu.a 00:01:14.695 [153/265] Linking static target lib/librte_dmadev.a 00:01:14.695 [154/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:14.695 [155/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.695 [156/265] Linking static target lib/librte_power.a 00:01:14.695 
[157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:14.695 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:14.695 [159/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:14.695 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:14.695 [161/265] Linking static target lib/librte_reorder.a 00:01:14.695 [162/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:14.695 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:14.695 [164/265] Linking static target lib/librte_mbuf.a 00:01:14.695 [165/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.695 [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:14.695 [167/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:14.695 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:14.695 [169/265] Linking static target lib/librte_security.a 00:01:14.695 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:14.695 [171/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:14.953 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:14.953 [173/265] Linking target lib/librte_kvargs.so.24.0 00:01:14.953 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:14.953 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:14.953 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:14.953 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:14.953 [178/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:14.953 [179/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:14.953 [180/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:14.953 [182/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:14.953 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:14.953 [184/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:14.953 [185/265] Linking static target lib/librte_hash.a 00:01:14.953 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:14.953 [187/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:14.953 [188/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:14.953 [189/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [190/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.953 [191/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.212 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:15.212 [193/265] Linking target lib/librte_telemetry.so.24.0 00:01:15.212 [194/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:15.212 [195/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:15.212 [196/265] Linking static target lib/librte_cryptodev.a 00:01:15.212 [197/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:15.212 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:15.212 [199/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:15.212 [200/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.212 [201/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:15.212 [202/265] Linking static target drivers/librte_bus_pci.a 00:01:15.212 [203/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.212 [204/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:15.212 [205/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:15.212 [206/265] Linking static target drivers/librte_bus_vdev.a 00:01:15.212 [207/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:15.212 [208/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:15.212 [209/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.212 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:15.212 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:15.212 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:15.212 [213/265] Linking static target drivers/librte_mempool_ring.a 00:01:15.469 [214/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.469 [215/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.469 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.469 [217/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.469 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.469 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.727 [220/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.727 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.727 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:15.727 [223/265] Linking static target lib/librte_ethdev.a 00:01:15.985 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.919 [225/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.486 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.486 [227/265] Linking static target lib/librte_vhost.a 00:01:19.389 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.582 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.582 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.582 [231/265] Linking target lib/librte_eal.so.24.0 00:01:24.841 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:24.841 [233/265] Linking target lib/librte_ring.so.24.0 00:01:24.841 [234/265] Linking target lib/librte_meter.so.24.0 00:01:24.841 [235/265] Linking target lib/librte_pci.so.24.0 00:01:24.841 [236/265] Linking target lib/librte_timer.so.24.0 00:01:24.841 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:24.841 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:24.841 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:24.841 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:24.841 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:24.841 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:24.841 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:25.100 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:25.100 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:25.100 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:25.100 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:25.100 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:25.100 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:25.100 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:25.359 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:25.359 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:25.359 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:25.359 [254/265] Linking target lib/librte_net.so.24.0 00:01:25.359 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:25.620 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:25.620 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:25.620 [258/265] Linking target lib/librte_security.so.24.0 00:01:25.620 [259/265] Linking target lib/librte_hash.so.24.0 00:01:25.620 [260/265] Linking target lib/librte_cmdline.so.24.0 00:01:25.620 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:25.879 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:25.879 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:25.879 [264/265] Linking target lib/librte_power.so.24.0 00:01:25.879 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:25.879 INFO: autodetecting backend as ninja 00:01:25.879 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:27.255 CC lib/log/log.o 00:01:27.255 CC lib/log/log_flags.o 00:01:27.255 CC lib/log/log_deprecated.o 00:01:27.255 CC lib/ut/ut.o 00:01:27.255 CC lib/ut_mock/mock.o 00:01:27.255 LIB libspdk_ut_mock.a 00:01:27.255 LIB libspdk_ut.a 00:01:27.255 LIB libspdk_log.a 00:01:27.255 SO libspdk_ut_mock.so.6.0 00:01:27.255 SO libspdk_ut.so.2.0 00:01:27.255 SO libspdk_log.so.7.0 00:01:27.255 SYMLINK libspdk_ut_mock.so 00:01:27.255 SYMLINK libspdk_ut.so 00:01:27.255 SYMLINK libspdk_log.so 00:01:27.513 CC lib/ioat/ioat.o 00:01:27.513 CC lib/util/base64.o 00:01:27.513 CC lib/util/bit_array.o 00:01:27.513 CXX lib/trace_parser/trace.o 00:01:27.513 CC lib/util/cpuset.o 00:01:27.513 CC lib/util/crc16.o 00:01:27.513 CC 
lib/util/crc32.o 00:01:27.513 CC lib/util/crc32c.o 00:01:27.513 CC lib/util/crc32_ieee.o 00:01:27.513 CC lib/util/crc64.o 00:01:27.513 CC lib/dma/dma.o 00:01:27.513 CC lib/util/dif.o 00:01:27.513 CC lib/util/fd.o 00:01:27.513 CC lib/util/file.o 00:01:27.513 CC lib/util/hexlify.o 00:01:27.513 CC lib/util/iov.o 00:01:27.513 CC lib/util/math.o 00:01:27.513 CC lib/util/pipe.o 00:01:27.513 CC lib/util/strerror_tls.o 00:01:27.513 CC lib/util/string.o 00:01:27.513 CC lib/util/uuid.o 00:01:27.513 CC lib/util/fd_group.o 00:01:27.513 CC lib/util/xor.o 00:01:27.513 CC lib/util/zipf.o 00:01:27.771 CC lib/vfio_user/host/vfio_user.o 00:01:27.771 CC lib/vfio_user/host/vfio_user_pci.o 00:01:27.771 LIB libspdk_dma.a 00:01:28.029 SO libspdk_dma.so.4.0 00:01:28.029 LIB libspdk_ioat.a 00:01:28.029 SO libspdk_ioat.so.7.0 00:01:28.029 SYMLINK libspdk_dma.so 00:01:28.029 SYMLINK libspdk_ioat.so 00:01:28.029 LIB libspdk_util.a 00:01:28.029 LIB libspdk_vfio_user.a 00:01:28.029 SO libspdk_vfio_user.so.5.0 00:01:28.029 SO libspdk_util.so.9.0 00:01:28.294 SYMLINK libspdk_vfio_user.so 00:01:28.294 SYMLINK libspdk_util.so 00:01:28.559 LIB libspdk_trace_parser.a 00:01:28.559 CC lib/conf/conf.o 00:01:28.559 CC lib/vmd/vmd.o 00:01:28.559 CC lib/env_dpdk/env.o 00:01:28.559 CC lib/vmd/led.o 00:01:28.559 CC lib/env_dpdk/memory.o 00:01:28.559 CC lib/env_dpdk/pci.o 00:01:28.559 CC lib/env_dpdk/init.o 00:01:28.559 CC lib/json/json_parse.o 00:01:28.559 CC lib/env_dpdk/threads.o 00:01:28.559 CC lib/json/json_util.o 00:01:28.559 CC lib/env_dpdk/pci_ioat.o 00:01:28.559 CC lib/json/json_write.o 00:01:28.559 CC lib/env_dpdk/pci_virtio.o 00:01:28.559 CC lib/env_dpdk/pci_vmd.o 00:01:28.559 CC lib/env_dpdk/pci_idxd.o 00:01:28.559 CC lib/env_dpdk/pci_event.o 00:01:28.559 CC lib/idxd/idxd.o 00:01:28.559 CC lib/env_dpdk/sigbus_handler.o 00:01:28.559 CC lib/rdma/common.o 00:01:28.559 CC lib/env_dpdk/pci_dpdk.o 00:01:28.559 CC lib/idxd/idxd_user.o 00:01:28.559 CC lib/rdma/rdma_verbs.o 00:01:28.559 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:28.559 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:28.559 SO libspdk_trace_parser.so.5.0 00:01:28.817 SYMLINK libspdk_trace_parser.so 00:01:28.817 LIB libspdk_conf.a 00:01:28.817 SO libspdk_conf.so.6.0 00:01:28.817 LIB libspdk_json.a 00:01:28.817 LIB libspdk_rdma.a 00:01:29.075 SYMLINK libspdk_conf.so 00:01:29.075 SO libspdk_json.so.6.0 00:01:29.075 SO libspdk_rdma.so.6.0 00:01:29.076 SYMLINK libspdk_rdma.so 00:01:29.076 SYMLINK libspdk_json.so 00:01:29.076 LIB libspdk_vmd.a 00:01:29.076 SO libspdk_vmd.so.6.0 00:01:29.076 SYMLINK libspdk_vmd.so 00:01:29.076 LIB libspdk_idxd.a 00:01:29.334 SO libspdk_idxd.so.12.0 00:01:29.334 SYMLINK libspdk_idxd.so 00:01:29.334 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.334 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.334 CC lib/jsonrpc/jsonrpc_client.o 00:01:29.334 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:29.592 LIB libspdk_jsonrpc.a 00:01:29.593 SO libspdk_jsonrpc.so.6.0 00:01:29.851 SYMLINK libspdk_jsonrpc.so 00:01:30.108 LIB libspdk_env_dpdk.a 00:01:30.108 CC lib/rpc/rpc.o 00:01:30.108 SO libspdk_env_dpdk.so.14.0 00:01:30.108 SYMLINK libspdk_env_dpdk.so 00:01:30.365 LIB libspdk_rpc.a 00:01:30.365 SO libspdk_rpc.so.6.0 00:01:30.365 SYMLINK libspdk_rpc.so 00:01:30.623 CC lib/keyring/keyring.o 00:01:30.623 CC lib/trace/trace.o 00:01:30.623 CC lib/keyring/keyring_rpc.o 00:01:30.623 CC lib/trace/trace_flags.o 00:01:30.623 CC lib/trace/trace_rpc.o 00:01:30.623 CC lib/notify/notify.o 00:01:30.623 CC lib/notify/notify_rpc.o 00:01:30.880 LIB libspdk_notify.a 00:01:30.880 SO 
libspdk_notify.so.6.0 00:01:30.880 LIB libspdk_keyring.a 00:01:30.880 LIB libspdk_trace.a 00:01:30.880 SYMLINK libspdk_notify.so 00:01:30.880 SO libspdk_keyring.so.1.0 00:01:30.880 SO libspdk_trace.so.10.0 00:01:31.139 SYMLINK libspdk_keyring.so 00:01:31.139 SYMLINK libspdk_trace.so 00:01:31.397 CC lib/sock/sock.o 00:01:31.397 CC lib/sock/sock_rpc.o 00:01:31.397 CC lib/thread/thread.o 00:01:31.397 CC lib/thread/iobuf.o 00:01:31.656 LIB libspdk_sock.a 00:01:31.914 SO libspdk_sock.so.9.0 00:01:31.914 SYMLINK libspdk_sock.so 00:01:32.173 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:32.173 CC lib/nvme/nvme_ctrlr.o 00:01:32.173 CC lib/nvme/nvme_fabric.o 00:01:32.173 CC lib/nvme/nvme_ns_cmd.o 00:01:32.173 CC lib/nvme/nvme_ns.o 00:01:32.173 CC lib/nvme/nvme_pcie_common.o 00:01:32.173 CC lib/nvme/nvme_pcie.o 00:01:32.173 CC lib/nvme/nvme_qpair.o 00:01:32.173 CC lib/nvme/nvme.o 00:01:32.173 CC lib/nvme/nvme_quirks.o 00:01:32.173 CC lib/nvme/nvme_transport.o 00:01:32.173 CC lib/nvme/nvme_discovery.o 00:01:32.173 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:32.173 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:32.173 CC lib/nvme/nvme_tcp.o 00:01:32.173 CC lib/nvme/nvme_opal.o 00:01:32.173 CC lib/nvme/nvme_io_msg.o 00:01:32.173 CC lib/nvme/nvme_poll_group.o 00:01:32.173 CC lib/nvme/nvme_zns.o 00:01:32.173 CC lib/nvme/nvme_stubs.o 00:01:32.173 CC lib/nvme/nvme_auth.o 00:01:32.173 CC lib/nvme/nvme_cuse.o 00:01:32.173 CC lib/nvme/nvme_vfio_user.o 00:01:32.173 CC lib/nvme/nvme_rdma.o 00:01:33.108 LIB libspdk_thread.a 00:01:33.108 SO libspdk_thread.so.10.0 00:01:33.108 SYMLINK libspdk_thread.so 00:01:33.364 CC lib/accel/accel_rpc.o 00:01:33.364 CC lib/accel/accel.o 00:01:33.364 CC lib/accel/accel_sw.o 00:01:33.364 CC lib/init/json_config.o 00:01:33.364 CC lib/vfu_tgt/tgt_endpoint.o 00:01:33.364 CC lib/init/subsystem.o 00:01:33.364 CC lib/vfu_tgt/tgt_rpc.o 00:01:33.364 CC lib/init/subsystem_rpc.o 00:01:33.364 CC lib/init/rpc.o 00:01:33.364 CC lib/blob/blobstore.o 00:01:33.364 CC lib/blob/request.o 00:01:33.364 CC lib/virtio/virtio.o 00:01:33.364 CC lib/blob/zeroes.o 00:01:33.364 CC lib/blob/blob_bs_dev.o 00:01:33.364 CC lib/virtio/virtio_vhost_user.o 00:01:33.364 CC lib/virtio/virtio_vfio_user.o 00:01:33.364 CC lib/virtio/virtio_pci.o 00:01:33.620 LIB libspdk_init.a 00:01:33.620 SO libspdk_init.so.5.0 00:01:33.620 LIB libspdk_vfu_tgt.a 00:01:33.620 LIB libspdk_virtio.a 00:01:33.620 SO libspdk_vfu_tgt.so.3.0 00:01:33.620 SYMLINK libspdk_init.so 00:01:33.620 SO libspdk_virtio.so.7.0 00:01:33.884 SYMLINK libspdk_vfu_tgt.so 00:01:33.884 SYMLINK libspdk_virtio.so 00:01:33.884 CC lib/event/app.o 00:01:33.884 CC lib/event/reactor.o 00:01:33.884 CC lib/event/log_rpc.o 00:01:33.884 CC lib/event/app_rpc.o 00:01:33.884 CC lib/event/scheduler_static.o 00:01:34.449 LIB libspdk_accel.a 00:01:34.449 SO libspdk_accel.so.15.0 00:01:34.449 LIB libspdk_nvme.a 00:01:34.449 LIB libspdk_event.a 00:01:34.449 SYMLINK libspdk_accel.so 00:01:34.449 SO libspdk_event.so.13.0 00:01:34.449 SO libspdk_nvme.so.13.0 00:01:34.449 SYMLINK libspdk_event.so 00:01:34.709 CC lib/bdev/bdev.o 00:01:34.709 CC lib/bdev/bdev_rpc.o 00:01:34.709 CC lib/bdev/bdev_zone.o 00:01:34.709 CC lib/bdev/part.o 00:01:34.709 CC lib/bdev/scsi_nvme.o 00:01:34.709 SYMLINK libspdk_nvme.so 00:01:36.081 LIB libspdk_blob.a 00:01:36.081 SO libspdk_blob.so.11.0 00:01:36.339 SYMLINK libspdk_blob.so 00:01:36.598 CC lib/lvol/lvol.o 00:01:36.598 CC lib/blobfs/blobfs.o 00:01:36.598 CC lib/blobfs/tree.o 00:01:37.534 LIB libspdk_bdev.a 00:01:37.534 SO libspdk_bdev.so.15.0 00:01:37.534 LIB 
libspdk_blobfs.a 00:01:37.534 SO libspdk_blobfs.so.10.0 00:01:37.534 LIB libspdk_lvol.a 00:01:37.534 SYMLINK libspdk_bdev.so 00:01:37.534 SO libspdk_lvol.so.10.0 00:01:37.534 SYMLINK libspdk_blobfs.so 00:01:37.534 SYMLINK libspdk_lvol.so 00:01:37.792 CC lib/ftl/ftl_core.o 00:01:37.792 CC lib/ftl/ftl_init.o 00:01:37.792 CC lib/ftl/ftl_layout.o 00:01:37.792 CC lib/ftl/ftl_debug.o 00:01:37.792 CC lib/ftl/ftl_io.o 00:01:37.792 CC lib/ftl/ftl_sb.o 00:01:37.792 CC lib/ftl/ftl_l2p.o 00:01:37.792 CC lib/ftl/ftl_l2p_flat.o 00:01:37.792 CC lib/ftl/ftl_nv_cache.o 00:01:37.792 CC lib/ftl/ftl_band.o 00:01:37.792 CC lib/ftl/ftl_band_ops.o 00:01:37.792 CC lib/ftl/ftl_writer.o 00:01:37.792 CC lib/ftl/ftl_rq.o 00:01:37.792 CC lib/ftl/ftl_reloc.o 00:01:37.792 CC lib/ublk/ublk.o 00:01:37.792 CC lib/nvmf/ctrlr.o 00:01:37.793 CC lib/scsi/dev.o 00:01:37.793 CC lib/ftl/ftl_l2p_cache.o 00:01:37.793 CC lib/ftl/ftl_p2l.o 00:01:37.793 CC lib/nbd/nbd.o 00:01:37.793 CC lib/ublk/ublk_rpc.o 00:01:37.793 CC lib/nvmf/ctrlr_discovery.o 00:01:37.793 CC lib/scsi/lun.o 00:01:37.793 CC lib/nbd/nbd_rpc.o 00:01:37.793 CC lib/scsi/port.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt.o 00:01:37.793 CC lib/nvmf/ctrlr_bdev.o 00:01:37.793 CC lib/scsi/scsi.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:37.793 CC lib/nvmf/nvmf.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:37.793 CC lib/nvmf/subsystem.o 00:01:37.793 CC lib/nvmf/nvmf_rpc.o 00:01:37.793 CC lib/scsi/scsi_bdev.o 00:01:37.793 CC lib/nvmf/transport.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:37.793 CC lib/scsi/scsi_rpc.o 00:01:37.793 CC lib/nvmf/tcp.o 00:01:37.793 CC lib/scsi/task.o 00:01:37.793 CC lib/scsi/scsi_pr.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:37.793 CC lib/nvmf/vfio_user.o 00:01:37.793 CC lib/nvmf/rdma.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:37.793 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:37.793 CC lib/ftl/utils/ftl_md.o 00:01:37.793 CC lib/ftl/utils/ftl_conf.o 00:01:37.793 CC lib/ftl/utils/ftl_bitmap.o 00:01:37.793 CC lib/ftl/utils/ftl_property.o 00:01:37.793 CC lib/ftl/utils/ftl_mempool.o 00:01:37.793 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:37.793 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:37.793 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:37.793 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:37.793 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:37.793 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:37.793 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:37.793 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:37.793 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:37.793 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:37.793 CC lib/ftl/ftl_trace.o 00:01:37.793 CC lib/ftl/base/ftl_base_dev.o 00:01:37.793 CC lib/ftl/base/ftl_base_bdev.o 00:01:38.360 LIB libspdk_nbd.a 00:01:38.360 SO libspdk_nbd.so.7.0 00:01:38.618 LIB libspdk_scsi.a 00:01:38.618 SYMLINK libspdk_nbd.so 00:01:38.618 LIB libspdk_ublk.a 00:01:38.618 SO libspdk_scsi.so.9.0 00:01:38.618 SO libspdk_ublk.so.3.0 00:01:38.618 SYMLINK libspdk_ublk.so 00:01:38.618 SYMLINK libspdk_scsi.so 00:01:38.877 LIB libspdk_ftl.a 00:01:38.877 CC lib/vhost/vhost.o 00:01:38.877 CC lib/vhost/vhost_rpc.o 00:01:38.877 CC lib/vhost/vhost_scsi.o 00:01:38.877 CC lib/vhost/rte_vhost_user.o 00:01:38.877 CC lib/vhost/vhost_blk.o 00:01:38.877 CC lib/iscsi/conn.o 
00:01:38.877 CC lib/iscsi/init_grp.o 00:01:38.877 CC lib/iscsi/iscsi.o 00:01:38.877 CC lib/iscsi/md5.o 00:01:38.877 CC lib/iscsi/param.o 00:01:38.877 CC lib/iscsi/portal_grp.o 00:01:38.877 CC lib/iscsi/tgt_node.o 00:01:38.877 CC lib/iscsi/iscsi_subsystem.o 00:01:38.877 CC lib/iscsi/iscsi_rpc.o 00:01:38.877 CC lib/iscsi/task.o 00:01:39.135 SO libspdk_ftl.so.9.0 00:01:39.394 SYMLINK libspdk_ftl.so 00:01:39.961 LIB libspdk_nvmf.a 00:01:40.219 LIB libspdk_vhost.a 00:01:40.219 SO libspdk_nvmf.so.18.0 00:01:40.219 SO libspdk_vhost.so.8.0 00:01:40.220 SYMLINK libspdk_vhost.so 00:01:40.220 SYMLINK libspdk_nvmf.so 00:01:40.479 LIB libspdk_iscsi.a 00:01:40.479 SO libspdk_iscsi.so.8.0 00:01:40.769 SYMLINK libspdk_iscsi.so 00:01:41.030 CC module/vfu_device/vfu_virtio.o 00:01:41.030 CC module/vfu_device/vfu_virtio_blk.o 00:01:41.030 CC module/vfu_device/vfu_virtio_scsi.o 00:01:41.030 CC module/vfu_device/vfu_virtio_rpc.o 00:01:41.030 CC module/env_dpdk/env_dpdk_rpc.o 00:01:41.286 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:41.286 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:41.286 CC module/accel/error/accel_error.o 00:01:41.286 CC module/accel/dsa/accel_dsa.o 00:01:41.286 CC module/accel/error/accel_error_rpc.o 00:01:41.286 CC module/accel/dsa/accel_dsa_rpc.o 00:01:41.286 CC module/scheduler/gscheduler/gscheduler.o 00:01:41.286 CC module/accel/ioat/accel_ioat.o 00:01:41.286 CC module/accel/iaa/accel_iaa.o 00:01:41.286 CC module/sock/posix/posix.o 00:01:41.286 CC module/accel/ioat/accel_ioat_rpc.o 00:01:41.286 CC module/accel/iaa/accel_iaa_rpc.o 00:01:41.286 CC module/blob/bdev/blob_bdev.o 00:01:41.286 CC module/keyring/file/keyring.o 00:01:41.286 CC module/keyring/file/keyring_rpc.o 00:01:41.286 LIB libspdk_env_dpdk_rpc.a 00:01:41.287 SO libspdk_env_dpdk_rpc.so.6.0 00:01:41.287 SYMLINK libspdk_env_dpdk_rpc.so 00:01:41.287 LIB libspdk_scheduler_dpdk_governor.a 00:01:41.545 LIB libspdk_scheduler_gscheduler.a 00:01:41.545 LIB libspdk_keyring_file.a 00:01:41.545 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:41.545 LIB libspdk_scheduler_dynamic.a 00:01:41.545 LIB libspdk_accel_error.a 00:01:41.545 SO libspdk_scheduler_gscheduler.so.4.0 00:01:41.545 SO libspdk_keyring_file.so.1.0 00:01:41.545 LIB libspdk_accel_ioat.a 00:01:41.545 LIB libspdk_accel_iaa.a 00:01:41.545 SO libspdk_scheduler_dynamic.so.4.0 00:01:41.545 SO libspdk_accel_error.so.2.0 00:01:41.545 SYMLINK libspdk_scheduler_gscheduler.so 00:01:41.545 SO libspdk_accel_ioat.so.6.0 00:01:41.545 LIB libspdk_accel_dsa.a 00:01:41.545 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:41.545 SO libspdk_accel_iaa.so.3.0 00:01:41.545 SYMLINK libspdk_keyring_file.so 00:01:41.545 LIB libspdk_blob_bdev.a 00:01:41.545 SO libspdk_accel_dsa.so.5.0 00:01:41.545 SYMLINK libspdk_scheduler_dynamic.so 00:01:41.545 SYMLINK libspdk_accel_error.so 00:01:41.545 SO libspdk_blob_bdev.so.11.0 00:01:41.545 SYMLINK libspdk_accel_ioat.so 00:01:41.545 SYMLINK libspdk_accel_iaa.so 00:01:41.545 SYMLINK libspdk_accel_dsa.so 00:01:41.545 SYMLINK libspdk_blob_bdev.so 00:01:41.804 LIB libspdk_vfu_device.a 00:01:41.804 SO libspdk_vfu_device.so.3.0 00:01:41.804 SYMLINK libspdk_vfu_device.so 00:01:42.062 LIB libspdk_sock_posix.a 00:01:42.062 SO libspdk_sock_posix.so.6.0 00:01:42.062 CC module/bdev/delay/vbdev_delay.o 00:01:42.062 CC module/blobfs/bdev/blobfs_bdev.o 00:01:42.062 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:42.062 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:42.062 CC module/bdev/error/vbdev_error_rpc.o 00:01:42.062 CC module/bdev/error/vbdev_error.o 
00:01:42.062 CC module/bdev/gpt/gpt.o 00:01:42.062 CC module/bdev/raid/bdev_raid.o 00:01:42.062 CC module/bdev/raid/bdev_raid_rpc.o 00:01:42.062 CC module/bdev/gpt/vbdev_gpt.o 00:01:42.062 CC module/bdev/ftl/bdev_ftl.o 00:01:42.062 CC module/bdev/raid/raid0.o 00:01:42.062 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:42.062 CC module/bdev/lvol/vbdev_lvol.o 00:01:42.062 CC module/bdev/raid/bdev_raid_sb.o 00:01:42.062 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:42.062 CC module/bdev/raid/concat.o 00:01:42.063 CC module/bdev/raid/raid1.o 00:01:42.063 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:42.063 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:42.063 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:42.063 CC module/bdev/nvme/bdev_nvme.o 00:01:42.063 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:42.063 CC module/bdev/null/bdev_null.o 00:01:42.063 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:42.063 CC module/bdev/null/bdev_null_rpc.o 00:01:42.063 CC module/bdev/nvme/nvme_rpc.o 00:01:42.063 CC module/bdev/passthru/vbdev_passthru.o 00:01:42.063 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:42.063 CC module/bdev/nvme/bdev_mdns_client.o 00:01:42.063 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:42.063 CC module/bdev/nvme/vbdev_opal.o 00:01:42.063 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:42.063 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:42.063 CC module/bdev/split/vbdev_split.o 00:01:42.063 CC module/bdev/malloc/bdev_malloc.o 00:01:42.063 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:42.063 CC module/bdev/aio/bdev_aio_rpc.o 00:01:42.063 CC module/bdev/split/vbdev_split_rpc.o 00:01:42.063 CC module/bdev/iscsi/bdev_iscsi.o 00:01:42.063 CC module/bdev/aio/bdev_aio.o 00:01:42.063 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:42.063 SYMLINK libspdk_sock_posix.so 00:01:42.320 LIB libspdk_blobfs_bdev.a 00:01:42.320 SO libspdk_blobfs_bdev.so.6.0 00:01:42.578 LIB libspdk_bdev_split.a 00:01:42.578 LIB libspdk_bdev_gpt.a 00:01:42.578 LIB libspdk_bdev_null.a 00:01:42.578 SYMLINK libspdk_blobfs_bdev.so 00:01:42.578 SO libspdk_bdev_gpt.so.6.0 00:01:42.578 SO libspdk_bdev_split.so.6.0 00:01:42.578 LIB libspdk_bdev_error.a 00:01:42.578 LIB libspdk_bdev_aio.a 00:01:42.578 LIB libspdk_bdev_ftl.a 00:01:42.578 SO libspdk_bdev_null.so.6.0 00:01:42.578 LIB libspdk_bdev_passthru.a 00:01:42.578 SO libspdk_bdev_error.so.6.0 00:01:42.578 SO libspdk_bdev_aio.so.6.0 00:01:42.578 LIB libspdk_bdev_zone_block.a 00:01:42.578 SYMLINK libspdk_bdev_gpt.so 00:01:42.578 SO libspdk_bdev_ftl.so.6.0 00:01:42.578 SYMLINK libspdk_bdev_split.so 00:01:42.578 SO libspdk_bdev_passthru.so.6.0 00:01:42.578 LIB libspdk_bdev_malloc.a 00:01:42.578 SYMLINK libspdk_bdev_null.so 00:01:42.578 SO libspdk_bdev_zone_block.so.6.0 00:01:42.578 LIB libspdk_bdev_delay.a 00:01:42.578 LIB libspdk_bdev_iscsi.a 00:01:42.578 SYMLINK libspdk_bdev_error.so 00:01:42.578 SYMLINK libspdk_bdev_aio.so 00:01:42.578 SO libspdk_bdev_malloc.so.6.0 00:01:42.578 SO libspdk_bdev_delay.so.6.0 00:01:42.578 SYMLINK libspdk_bdev_passthru.so 00:01:42.578 SYMLINK libspdk_bdev_ftl.so 00:01:42.578 SO libspdk_bdev_iscsi.so.6.0 00:01:42.578 SYMLINK libspdk_bdev_zone_block.so 00:01:42.836 LIB libspdk_bdev_lvol.a 00:01:42.836 SYMLINK libspdk_bdev_delay.so 00:01:42.836 SYMLINK libspdk_bdev_malloc.so 00:01:42.836 SYMLINK libspdk_bdev_iscsi.so 00:01:42.836 SO libspdk_bdev_lvol.so.6.0 00:01:42.836 LIB libspdk_bdev_virtio.a 00:01:42.836 SO libspdk_bdev_virtio.so.6.0 00:01:42.836 SYMLINK libspdk_bdev_lvol.so 00:01:42.836 SYMLINK libspdk_bdev_virtio.so 00:01:43.094 LIB 
libspdk_bdev_raid.a 00:01:43.094 SO libspdk_bdev_raid.so.6.0 00:01:43.352 SYMLINK libspdk_bdev_raid.so 00:01:44.287 LIB libspdk_bdev_nvme.a 00:01:44.546 SO libspdk_bdev_nvme.so.7.0 00:01:44.546 SYMLINK libspdk_bdev_nvme.so 00:01:45.112 CC module/event/subsystems/vmd/vmd.o 00:01:45.112 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.112 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.112 CC module/event/subsystems/sock/sock.o 00:01:45.112 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.112 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:45.112 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.112 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:45.112 CC module/event/subsystems/keyring/keyring.o 00:01:45.371 LIB libspdk_event_sock.a 00:01:45.371 LIB libspdk_event_vhost_blk.a 00:01:45.371 LIB libspdk_event_scheduler.a 00:01:45.371 LIB libspdk_event_vmd.a 00:01:45.371 LIB libspdk_event_keyring.a 00:01:45.371 LIB libspdk_event_vfu_tgt.a 00:01:45.371 LIB libspdk_event_iobuf.a 00:01:45.371 SO libspdk_event_sock.so.5.0 00:01:45.371 SO libspdk_event_vhost_blk.so.3.0 00:01:45.371 SO libspdk_event_scheduler.so.4.0 00:01:45.371 SO libspdk_event_keyring.so.1.0 00:01:45.371 SO libspdk_event_vmd.so.6.0 00:01:45.371 SO libspdk_event_vfu_tgt.so.3.0 00:01:45.371 SO libspdk_event_iobuf.so.3.0 00:01:45.371 SYMLINK libspdk_event_sock.so 00:01:45.371 SYMLINK libspdk_event_vhost_blk.so 00:01:45.371 SYMLINK libspdk_event_scheduler.so 00:01:45.371 SYMLINK libspdk_event_keyring.so 00:01:45.371 SYMLINK libspdk_event_vfu_tgt.so 00:01:45.371 SYMLINK libspdk_event_vmd.so 00:01:45.371 SYMLINK libspdk_event_iobuf.so 00:01:45.938 CC module/event/subsystems/accel/accel.o 00:01:45.938 LIB libspdk_event_accel.a 00:01:45.938 SO libspdk_event_accel.so.6.0 00:01:45.938 SYMLINK libspdk_event_accel.so 00:01:46.196 CC module/event/subsystems/bdev/bdev.o 00:01:46.455 LIB libspdk_event_bdev.a 00:01:46.455 SO libspdk_event_bdev.so.6.0 00:01:46.713 SYMLINK libspdk_event_bdev.so 00:01:46.971 CC module/event/subsystems/scsi/scsi.o 00:01:46.971 CC module/event/subsystems/ublk/ublk.o 00:01:46.971 CC module/event/subsystems/nbd/nbd.o 00:01:46.971 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:46.971 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:46.971 LIB libspdk_event_ublk.a 00:01:46.971 LIB libspdk_event_scsi.a 00:01:46.971 LIB libspdk_event_nbd.a 00:01:47.231 SO libspdk_event_scsi.so.6.0 00:01:47.231 SO libspdk_event_ublk.so.3.0 00:01:47.231 SO libspdk_event_nbd.so.6.0 00:01:47.231 LIB libspdk_event_nvmf.a 00:01:47.231 SYMLINK libspdk_event_scsi.so 00:01:47.231 SYMLINK libspdk_event_nbd.so 00:01:47.231 SYMLINK libspdk_event_ublk.so 00:01:47.231 SO libspdk_event_nvmf.so.6.0 00:01:47.231 SYMLINK libspdk_event_nvmf.so 00:01:47.489 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.489 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.748 LIB libspdk_event_vhost_scsi.a 00:01:47.748 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.748 LIB libspdk_event_iscsi.a 00:01:47.748 SO libspdk_event_iscsi.so.6.0 00:01:47.748 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.748 SYMLINK libspdk_event_iscsi.so 00:01:48.007 SO libspdk.so.6.0 00:01:48.007 SYMLINK libspdk.so 00:01:48.266 CXX app/trace/trace.o 00:01:48.266 CC app/trace_record/trace_record.o 00:01:48.266 CC app/spdk_lspci/spdk_lspci.o 00:01:48.266 CC app/spdk_nvme_discover/discovery_aer.o 00:01:48.266 CC app/spdk_nvme_identify/identify.o 00:01:48.266 CC app/spdk_nvme_perf/perf.o 00:01:48.266 CC test/rpc_client/rpc_client_test.o 00:01:48.266 TEST_HEADER 
include/spdk/accel.h 00:01:48.266 CC app/spdk_top/spdk_top.o 00:01:48.266 TEST_HEADER include/spdk/assert.h 00:01:48.266 TEST_HEADER include/spdk/barrier.h 00:01:48.266 TEST_HEADER include/spdk/accel_module.h 00:01:48.266 TEST_HEADER include/spdk/bdev.h 00:01:48.266 TEST_HEADER include/spdk/base64.h 00:01:48.266 TEST_HEADER include/spdk/bdev_zone.h 00:01:48.266 TEST_HEADER include/spdk/bdev_module.h 00:01:48.266 TEST_HEADER include/spdk/bit_pool.h 00:01:48.266 TEST_HEADER include/spdk/bit_array.h 00:01:48.266 TEST_HEADER include/spdk/blob_bdev.h 00:01:48.266 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:48.266 TEST_HEADER include/spdk/blob.h 00:01:48.266 TEST_HEADER include/spdk/conf.h 00:01:48.266 TEST_HEADER include/spdk/blobfs.h 00:01:48.266 TEST_HEADER include/spdk/cpuset.h 00:01:48.266 TEST_HEADER include/spdk/config.h 00:01:48.266 TEST_HEADER include/spdk/crc16.h 00:01:48.266 TEST_HEADER include/spdk/crc64.h 00:01:48.266 TEST_HEADER include/spdk/crc32.h 00:01:48.266 TEST_HEADER include/spdk/dif.h 00:01:48.266 TEST_HEADER include/spdk/env_dpdk.h 00:01:48.266 TEST_HEADER include/spdk/dma.h 00:01:48.266 TEST_HEADER include/spdk/env.h 00:01:48.266 TEST_HEADER include/spdk/endian.h 00:01:48.266 TEST_HEADER include/spdk/file.h 00:01:48.266 TEST_HEADER include/spdk/event.h 00:01:48.266 TEST_HEADER include/spdk/ftl.h 00:01:48.266 TEST_HEADER include/spdk/fd.h 00:01:48.266 TEST_HEADER include/spdk/fd_group.h 00:01:48.266 TEST_HEADER include/spdk/gpt_spec.h 00:01:48.266 TEST_HEADER include/spdk/hexlify.h 00:01:48.266 TEST_HEADER include/spdk/histogram_data.h 00:01:48.266 TEST_HEADER include/spdk/idxd.h 00:01:48.266 TEST_HEADER include/spdk/init.h 00:01:48.266 CC app/spdk_dd/spdk_dd.o 00:01:48.266 TEST_HEADER include/spdk/idxd_spec.h 00:01:48.266 TEST_HEADER include/spdk/iscsi_spec.h 00:01:48.266 TEST_HEADER include/spdk/ioat.h 00:01:48.266 TEST_HEADER include/spdk/json.h 00:01:48.266 TEST_HEADER include/spdk/ioat_spec.h 00:01:48.266 CC app/iscsi_tgt/iscsi_tgt.o 00:01:48.266 TEST_HEADER include/spdk/jsonrpc.h 00:01:48.266 TEST_HEADER include/spdk/keyring.h 00:01:48.266 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:48.266 TEST_HEADER include/spdk/keyring_module.h 00:01:48.266 TEST_HEADER include/spdk/likely.h 00:01:48.266 TEST_HEADER include/spdk/log.h 00:01:48.266 TEST_HEADER include/spdk/lvol.h 00:01:48.266 TEST_HEADER include/spdk/memory.h 00:01:48.266 TEST_HEADER include/spdk/mmio.h 00:01:48.266 CC app/nvmf_tgt/nvmf_main.o 00:01:48.266 TEST_HEADER include/spdk/nbd.h 00:01:48.266 TEST_HEADER include/spdk/notify.h 00:01:48.266 TEST_HEADER include/spdk/nvme_intel.h 00:01:48.266 CC app/vhost/vhost.o 00:01:48.266 TEST_HEADER include/spdk/nvme.h 00:01:48.266 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:48.266 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:48.266 TEST_HEADER include/spdk/nvme_spec.h 00:01:48.266 TEST_HEADER include/spdk/nvme_zns.h 00:01:48.266 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:48.536 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:48.537 TEST_HEADER include/spdk/nvmf.h 00:01:48.537 TEST_HEADER include/spdk/nvmf_spec.h 00:01:48.537 TEST_HEADER include/spdk/nvmf_transport.h 00:01:48.537 TEST_HEADER include/spdk/opal.h 00:01:48.537 TEST_HEADER include/spdk/opal_spec.h 00:01:48.537 TEST_HEADER include/spdk/pipe.h 00:01:48.537 TEST_HEADER include/spdk/pci_ids.h 00:01:48.537 TEST_HEADER include/spdk/queue.h 00:01:48.537 TEST_HEADER include/spdk/reduce.h 00:01:48.537 CC app/spdk_tgt/spdk_tgt.o 00:01:48.537 TEST_HEADER include/spdk/rpc.h 00:01:48.537 TEST_HEADER 
include/spdk/scheduler.h 00:01:48.537 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.537 TEST_HEADER include/spdk/sock.h 00:01:48.537 TEST_HEADER include/spdk/scsi.h 00:01:48.537 TEST_HEADER include/spdk/stdinc.h 00:01:48.537 TEST_HEADER include/spdk/string.h 00:01:48.537 TEST_HEADER include/spdk/thread.h 00:01:48.537 TEST_HEADER include/spdk/trace.h 00:01:48.537 TEST_HEADER include/spdk/trace_parser.h 00:01:48.537 TEST_HEADER include/spdk/tree.h 00:01:48.537 TEST_HEADER include/spdk/util.h 00:01:48.537 TEST_HEADER include/spdk/ublk.h 00:01:48.537 TEST_HEADER include/spdk/uuid.h 00:01:48.537 TEST_HEADER include/spdk/version.h 00:01:48.537 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.537 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.537 TEST_HEADER include/spdk/vhost.h 00:01:48.537 TEST_HEADER include/spdk/vmd.h 00:01:48.537 TEST_HEADER include/spdk/zipf.h 00:01:48.537 CXX test/cpp_headers/accel.o 00:01:48.537 TEST_HEADER include/spdk/xor.h 00:01:48.537 CXX test/cpp_headers/accel_module.o 00:01:48.537 CXX test/cpp_headers/assert.o 00:01:48.537 CXX test/cpp_headers/base64.o 00:01:48.537 CXX test/cpp_headers/barrier.o 00:01:48.537 CXX test/cpp_headers/bdev.o 00:01:48.537 CXX test/cpp_headers/bit_array.o 00:01:48.537 CXX test/cpp_headers/bdev_module.o 00:01:48.537 CXX test/cpp_headers/bit_pool.o 00:01:48.537 CXX test/cpp_headers/bdev_zone.o 00:01:48.537 CXX test/cpp_headers/blobfs_bdev.o 00:01:48.537 CXX test/cpp_headers/blobfs.o 00:01:48.537 CXX test/cpp_headers/blob_bdev.o 00:01:48.537 CXX test/cpp_headers/blob.o 00:01:48.537 CXX test/cpp_headers/cpuset.o 00:01:48.537 CXX test/cpp_headers/crc16.o 00:01:48.537 CXX test/cpp_headers/conf.o 00:01:48.537 CXX test/cpp_headers/config.o 00:01:48.537 CXX test/cpp_headers/crc32.o 00:01:48.537 CXX test/cpp_headers/dma.o 00:01:48.537 CXX test/cpp_headers/crc64.o 00:01:48.537 CXX test/cpp_headers/dif.o 00:01:48.537 CXX test/cpp_headers/env_dpdk.o 00:01:48.537 CXX test/cpp_headers/env.o 00:01:48.537 CXX test/cpp_headers/endian.o 00:01:48.537 CXX test/cpp_headers/event.o 00:01:48.537 CXX test/cpp_headers/fd_group.o 00:01:48.537 CXX test/cpp_headers/fd.o 00:01:48.537 CXX test/cpp_headers/file.o 00:01:48.537 CXX test/cpp_headers/gpt_spec.o 00:01:48.537 CXX test/cpp_headers/ftl.o 00:01:48.537 CXX test/cpp_headers/hexlify.o 00:01:48.537 CXX test/cpp_headers/idxd.o 00:01:48.537 CXX test/cpp_headers/histogram_data.o 00:01:48.537 CXX test/cpp_headers/idxd_spec.o 00:01:48.537 CXX test/cpp_headers/init.o 00:01:48.537 CXX test/cpp_headers/ioat.o 00:01:48.537 CC examples/ioat/perf/perf.o 00:01:48.537 CC examples/ioat/verify/verify.o 00:01:48.537 CC examples/accel/perf/accel_perf.o 00:01:48.537 CC test/env/memory/memory_ut.o 00:01:48.537 CXX test/cpp_headers/ioat_spec.o 00:01:48.537 CC test/app/jsoncat/jsoncat.o 00:01:48.537 CC examples/nvme/hotplug/hotplug.o 00:01:48.537 CC test/app/histogram_perf/histogram_perf.o 00:01:48.537 CC app/fio/nvme/fio_plugin.o 00:01:48.537 CC examples/util/zipf/zipf.o 00:01:48.537 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:48.537 CC examples/nvme/arbitration/arbitration.o 00:01:48.537 CC examples/nvme/reconnect/reconnect.o 00:01:48.537 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:48.537 CC examples/vmd/lsvmd/lsvmd.o 00:01:48.537 CC test/nvme/e2edp/nvme_dp.o 00:01:48.537 CC test/app/stub/stub.o 00:01:48.537 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:48.537 CC test/env/vtophys/vtophys.o 00:01:48.537 CC test/nvme/err_injection/err_injection.o 00:01:48.537 CC examples/sock/hello_world/hello_sock.o 00:01:48.537 CC 
test/event/reactor/reactor.o 00:01:48.537 CC examples/vmd/led/led.o 00:01:48.537 CC test/event/event_perf/event_perf.o 00:01:48.537 CC test/env/pci/pci_ut.o 00:01:48.537 CC test/nvme/reserve/reserve.o 00:01:48.537 CC test/nvme/compliance/nvme_compliance.o 00:01:48.537 CC examples/nvme/abort/abort.o 00:01:48.537 CC test/nvme/overhead/overhead.o 00:01:48.537 CC examples/blob/cli/blobcli.o 00:01:48.537 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:48.801 CC examples/nvme/hello_world/hello_world.o 00:01:48.801 CC test/nvme/boot_partition/boot_partition.o 00:01:48.801 CC test/nvme/simple_copy/simple_copy.o 00:01:48.801 CC examples/idxd/perf/perf.o 00:01:48.801 CC app/fio/bdev/fio_plugin.o 00:01:48.801 CC test/thread/poller_perf/poller_perf.o 00:01:48.801 CC test/nvme/startup/startup.o 00:01:48.801 CC test/event/reactor_perf/reactor_perf.o 00:01:48.801 CC test/nvme/sgl/sgl.o 00:01:48.801 CC examples/blob/hello_world/hello_blob.o 00:01:48.801 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:48.801 CC test/dma/test_dma/test_dma.o 00:01:48.801 CC examples/nvmf/nvmf/nvmf.o 00:01:48.801 CC test/nvme/fdp/fdp.o 00:01:48.801 CC examples/thread/thread/thread_ex.o 00:01:48.801 CC test/accel/dif/dif.o 00:01:48.801 CC test/nvme/fused_ordering/fused_ordering.o 00:01:48.801 CC test/nvme/aer/aer.o 00:01:48.801 CC test/nvme/connect_stress/connect_stress.o 00:01:48.801 CC test/app/bdev_svc/bdev_svc.o 00:01:48.801 CC test/nvme/reset/reset.o 00:01:48.801 CC test/event/app_repeat/app_repeat.o 00:01:48.801 CC test/blobfs/mkfs/mkfs.o 00:01:48.801 CC test/nvme/cuse/cuse.o 00:01:48.801 CC test/event/scheduler/scheduler.o 00:01:48.801 CC examples/bdev/hello_world/hello_bdev.o 00:01:48.801 CC test/bdev/bdevio/bdevio.o 00:01:48.801 CC examples/bdev/bdevperf/bdevperf.o 00:01:48.801 LINK spdk_nvme_discover 00:01:49.059 CC test/env/mem_callbacks/mem_callbacks.o 00:01:49.059 LINK spdk_lspci 00:01:49.059 LINK nvmf_tgt 00:01:49.059 LINK vhost 00:01:49.059 LINK spdk_trace_record 00:01:49.059 LINK jsoncat 00:01:49.059 LINK iscsi_tgt 00:01:49.059 LINK lsvmd 00:01:49.059 LINK histogram_perf 00:01:49.059 LINK reactor 00:01:49.059 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:49.059 CC test/lvol/esnap/esnap.o 00:01:49.059 LINK vtophys 00:01:49.330 CXX test/cpp_headers/iscsi_spec.o 00:01:49.330 CXX test/cpp_headers/json.o 00:01:49.330 LINK event_perf 00:01:49.330 CXX test/cpp_headers/jsonrpc.o 00:01:49.330 CXX test/cpp_headers/keyring.o 00:01:49.330 CXX test/cpp_headers/keyring_module.o 00:01:49.330 LINK poller_perf 00:01:49.330 CXX test/cpp_headers/likely.o 00:01:49.330 CXX test/cpp_headers/log.o 00:01:49.330 CXX test/cpp_headers/lvol.o 00:01:49.330 CXX test/cpp_headers/memory.o 00:01:49.330 CXX test/cpp_headers/mmio.o 00:01:49.330 LINK ioat_perf 00:01:49.330 CXX test/cpp_headers/notify.o 00:01:49.330 LINK rpc_client_test 00:01:49.330 CXX test/cpp_headers/nbd.o 00:01:49.330 CXX test/cpp_headers/nvme.o 00:01:49.330 CXX test/cpp_headers/nvme_intel.o 00:01:49.330 LINK cmb_copy 00:01:49.330 LINK pmr_persistence 00:01:49.330 LINK interrupt_tgt 00:01:49.330 LINK startup 00:01:49.330 LINK app_repeat 00:01:49.330 LINK verify 00:01:49.330 CXX test/cpp_headers/nvme_ocssd.o 00:01:49.330 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:49.330 LINK hotplug 00:01:49.330 LINK connect_stress 00:01:49.330 LINK doorbell_aers 00:01:49.330 CXX test/cpp_headers/nvme_spec.o 00:01:49.330 CXX test/cpp_headers/nvme_zns.o 00:01:49.330 LINK fused_ordering 00:01:49.330 LINK bdev_svc 00:01:49.330 LINK zipf 00:01:49.330 CXX test/cpp_headers/nvmf_cmd.o 
00:01:49.330 LINK spdk_dd 00:01:49.330 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:49.330 LINK nvme_dp 00:01:49.330 CXX test/cpp_headers/nvmf.o 00:01:49.330 CXX test/cpp_headers/nvmf_spec.o 00:01:49.330 LINK led 00:01:49.330 CXX test/cpp_headers/nvmf_transport.o 00:01:49.330 LINK hello_blob 00:01:49.330 LINK spdk_tgt 00:01:49.330 LINK reactor_perf 00:01:49.330 LINK env_dpdk_post_init 00:01:49.330 LINK err_injection 00:01:49.330 LINK spdk_trace 00:01:49.330 LINK hello_bdev 00:01:49.330 LINK boot_partition 00:01:49.330 CXX test/cpp_headers/opal.o 00:01:49.331 CXX test/cpp_headers/opal_spec.o 00:01:49.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:49.590 CXX test/cpp_headers/pci_ids.o 00:01:49.590 CXX test/cpp_headers/pipe.o 00:01:49.590 CXX test/cpp_headers/queue.o 00:01:49.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.590 CXX test/cpp_headers/reduce.o 00:01:49.590 CXX test/cpp_headers/rpc.o 00:01:49.590 CXX test/cpp_headers/scheduler.o 00:01:49.590 LINK stub 00:01:49.590 CXX test/cpp_headers/scsi.o 00:01:49.590 LINK nvme_compliance 00:01:49.590 CXX test/cpp_headers/scsi_spec.o 00:01:49.590 CXX test/cpp_headers/sock.o 00:01:49.590 LINK arbitration 00:01:49.590 CXX test/cpp_headers/stdinc.o 00:01:49.590 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:49.590 CXX test/cpp_headers/string.o 00:01:49.590 CXX test/cpp_headers/thread.o 00:01:49.590 CXX test/cpp_headers/trace.o 00:01:49.590 CXX test/cpp_headers/trace_parser.o 00:01:49.590 LINK fdp 00:01:49.590 CXX test/cpp_headers/tree.o 00:01:49.590 CXX test/cpp_headers/ublk.o 00:01:49.590 CXX test/cpp_headers/util.o 00:01:49.590 CXX test/cpp_headers/uuid.o 00:01:49.590 CXX test/cpp_headers/version.o 00:01:49.590 CXX test/cpp_headers/vfio_user_pci.o 00:01:49.590 LINK reserve 00:01:49.590 LINK simple_copy 00:01:49.590 CXX test/cpp_headers/vfio_user_spec.o 00:01:49.590 CXX test/cpp_headers/vhost.o 00:01:49.590 CXX test/cpp_headers/vmd.o 00:01:49.590 LINK mkfs 00:01:49.590 CXX test/cpp_headers/xor.o 00:01:49.590 CXX test/cpp_headers/zipf.o 00:01:49.590 LINK overhead 00:01:49.590 LINK hello_sock 00:01:49.590 LINK hello_world 00:01:49.590 LINK accel_perf 00:01:49.590 LINK sgl 00:01:49.590 LINK thread 00:01:49.590 LINK scheduler 00:01:49.590 LINK reset 00:01:49.848 LINK aer 00:01:49.848 LINK nvmf 00:01:49.848 LINK idxd_perf 00:01:49.848 LINK spdk_nvme 00:01:49.848 LINK reconnect 00:01:49.848 LINK abort 00:01:49.848 LINK dif 00:01:49.848 LINK nvme_fuzz 00:01:49.848 LINK test_dma 00:01:49.848 LINK pci_ut 00:01:50.107 LINK bdevio 00:01:50.107 LINK spdk_top 00:01:50.107 LINK spdk_nvme_identify 00:01:50.107 LINK blobcli 00:01:50.107 LINK spdk_bdev 00:01:50.107 LINK mem_callbacks 00:01:50.107 LINK nvme_manage 00:01:50.365 LINK vhost_fuzz 00:01:50.365 LINK memory_ut 00:01:50.365 LINK bdevperf 00:01:50.365 LINK spdk_nvme_perf 00:01:50.623 LINK cuse 00:01:51.560 LINK iscsi_fuzz 00:01:52.498 LINK esnap 00:01:53.066 00:01:53.066 real 0m48.700s 00:01:53.066 user 8m8.636s 00:01:53.066 sys 4m23.089s 00:01:53.066 03:53:07 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:53.066 03:53:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.066 ************************************ 00:01:53.066 END TEST make 00:01:53.066 ************************************ 00:01:53.066 03:53:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:53.066 03:53:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:53.066 03:53:07 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:53.066 03:53:07 -- pm/common@43 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:01:53.066 03:53:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:53.066 03:53:07 -- pm/common@45 -- $ pid=3519041 00:01:53.066 03:53:07 -- pm/common@52 -- $ sudo kill -TERM 3519041 00:01:53.066 03:53:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.066 03:53:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:53.066 03:53:07 -- pm/common@45 -- $ pid=3519043 00:01:53.066 03:53:07 -- pm/common@52 -- $ sudo kill -TERM 3519043 00:01:53.066 03:53:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.066 03:53:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:53.066 03:53:07 -- pm/common@45 -- $ pid=3519045 00:01:53.066 03:53:07 -- pm/common@52 -- $ sudo kill -TERM 3519045 00:01:53.326 03:53:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.326 03:53:07 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:53.326 03:53:07 -- pm/common@45 -- $ pid=3519046 00:01:53.326 03:53:07 -- pm/common@52 -- $ sudo kill -TERM 3519046 00:01:53.326 03:53:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.326 03:53:07 -- nvmf/common.sh@7 -- # uname -s 00:01:53.326 03:53:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.326 03:53:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.326 03:53:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.326 03:53:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.326 03:53:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.326 03:53:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.326 03:53:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.326 03:53:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.326 03:53:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.326 03:53:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.326 03:53:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:01:53.326 03:53:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:01:53.326 03:53:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.326 03:53:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.326 03:53:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:53.326 03:53:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.326 03:53:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.326 03:53:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.326 03:53:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.326 03:53:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.326 03:53:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.326 03:53:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.326 03:53:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.326 03:53:07 -- paths/export.sh@5 -- # export PATH 00:01:53.326 03:53:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.326 03:53:07 -- nvmf/common.sh@47 -- # : 0 00:01:53.326 03:53:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.326 03:53:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.326 03:53:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.326 03:53:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.326 03:53:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.326 03:53:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.326 03:53:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.326 03:53:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.326 03:53:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.326 03:53:07 -- spdk/autotest.sh@32 -- # uname -s 00:01:53.326 03:53:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.326 03:53:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.326 03:53:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.326 03:53:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.326 03:53:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.326 03:53:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.326 03:53:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.326 03:53:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.326 03:53:07 -- spdk/autotest.sh@48 -- # udevadm_pid=3579709 00:01:53.326 03:53:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.326 03:53:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.326 03:53:07 -- pm/common@17 -- # local monitor 00:01:53.326 03:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.326 03:53:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3579711 00:01:53.326 03:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.326 03:53:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3579714 00:01:53.326 03:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.326 03:53:07 -- pm/common@21 -- # date +%s 00:01:53.326 03:53:07 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3579716 00:01:53.326 03:53:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.326 03:53:07 -- pm/common@21 -- # date +%s 00:01:53.326 03:53:07 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=3579719 00:01:53.326 03:53:07 -- pm/common@26 -- # sleep 1 00:01:53.326 03:53:07 -- pm/common@21 -- # date +%s 00:01:53.586 03:53:07 -- pm/common@21 -- # date +%s 00:01:53.586 03:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491587 00:01:53.586 03:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491587 00:01:53.586 03:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491587 00:01:53.586 03:53:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713491587 00:01:53.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491587_collect-vmstat.pm.log 00:01:53.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491587_collect-cpu-load.pm.log 00:01:53.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491587_collect-bmc-pm.bmc.pm.log 00:01:53.586 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713491587_collect-cpu-temp.pm.log 00:01:54.524 03:53:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.525 03:53:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.525 03:53:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:54.525 03:53:08 -- common/autotest_common.sh@10 -- # set +x 00:01:54.525 03:53:08 -- spdk/autotest.sh@59 -- # create_test_list 00:01:54.525 03:53:08 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:54.525 03:53:08 -- common/autotest_common.sh@10 -- # set +x 00:01:54.525 03:53:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:54.525 03:53:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.525 03:53:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.525 03:53:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:54.525 03:53:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.525 03:53:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:54.525 03:53:08 -- common/autotest_common.sh@1441 -- # uname 00:01:54.525 03:53:08 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:54.525 03:53:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:54.525 03:53:08 -- common/autotest_common.sh@1461 -- # uname 00:01:54.525 03:53:08 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:54.525 03:53:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:54.525 03:53:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:54.525 03:53:08 -- spdk/autotest.sh@72 -- # hash lcov 00:01:54.525 03:53:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:01:54.525 03:53:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:54.525 --rc lcov_branch_coverage=1 00:01:54.525 --rc lcov_function_coverage=1 00:01:54.525 --rc genhtml_branch_coverage=1 00:01:54.525 --rc genhtml_function_coverage=1 00:01:54.525 --rc genhtml_legend=1 00:01:54.525 --rc geninfo_all_blocks=1 00:01:54.525 ' 00:01:54.525 03:53:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:54.525 --rc lcov_branch_coverage=1 00:01:54.525 --rc lcov_function_coverage=1 00:01:54.525 --rc genhtml_branch_coverage=1 00:01:54.525 --rc genhtml_function_coverage=1 00:01:54.525 --rc genhtml_legend=1 00:01:54.525 --rc geninfo_all_blocks=1 00:01:54.525 ' 00:01:54.525 03:53:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:54.525 --rc lcov_branch_coverage=1 00:01:54.525 --rc lcov_function_coverage=1 00:01:54.525 --rc genhtml_branch_coverage=1 00:01:54.525 --rc genhtml_function_coverage=1 00:01:54.525 --rc genhtml_legend=1 00:01:54.525 --rc geninfo_all_blocks=1 00:01:54.525 --no-external' 00:01:54.525 03:53:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:54.525 --rc lcov_branch_coverage=1 00:01:54.525 --rc lcov_function_coverage=1 00:01:54.525 --rc genhtml_branch_coverage=1 00:01:54.525 --rc genhtml_function_coverage=1 00:01:54.525 --rc genhtml_legend=1 00:01:54.525 --rc geninfo_all_blocks=1 00:01:54.525 --no-external' 00:01:54.525 03:53:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:54.525 lcov: LCOV version 1.14 00:01:54.525 03:53:09 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:09.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:09.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:09.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:09.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:09.432 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:27.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:27.516 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:27.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:27.516 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 
00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 
00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:27.517 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:27.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:27.517 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:27.518 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:27.518 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:27.518 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:28.086 03:53:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:28.086 03:53:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:28.086 03:53:42 -- common/autotest_common.sh@10 -- # set +x 00:02:28.086 03:53:42 -- spdk/autotest.sh@91 -- # rm -f 00:02:28.087 03:53:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.376 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:02:31.376 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:31.376 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:31.376 03:53:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:31.376 03:53:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:31.376 03:53:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:31.376 03:53:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:31.376 03:53:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:31.376 03:53:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:31.376 03:53:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:31.376 03:53:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:31.376 03:53:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:31.376 03:53:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:31.376 03:53:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:31.376 03:53:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
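For reference, the get_zoned_devs pass traced above reduces to reading each NVMe block device's queue/zoned sysfs attribute; a minimal bash sketch, with the /sys/block/nvme* glob as an illustrative assumption rather than the helper's exact traversal:

# Zoned namespaces report e.g. "host-managed" in queue/zoned; plain ones report "none".
for dev in /sys/block/nvme*; do
    [[ -e "$dev/queue/zoned" ]] || continue
    [[ "$(cat "$dev/queue/zoned")" == none ]] || echo "zoned: ${dev##*/}"
done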
00:02:31.376 03:53:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:31.376 03:53:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:31.376 03:53:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:31.376 No valid GPT data, bailing 00:02:31.376 03:53:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:31.376 03:53:45 -- scripts/common.sh@391 -- # pt= 00:02:31.376 03:53:45 -- scripts/common.sh@392 -- # return 1 00:02:31.376 03:53:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:31.376 1+0 records in 00:02:31.376 1+0 records out 00:02:31.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00270217 s, 388 MB/s 00:02:31.376 03:53:45 -- spdk/autotest.sh@118 -- # sync 00:02:31.376 03:53:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:31.376 03:53:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:31.376 03:53:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:37.953 03:53:51 -- spdk/autotest.sh@124 -- # uname -s 00:02:37.953 03:53:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:37.953 03:53:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.953 03:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.953 03:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.953 03:53:51 -- common/autotest_common.sh@10 -- # set +x 00:02:37.953 ************************************ 00:02:37.953 START TEST setup.sh 00:02:37.953 ************************************ 00:02:37.953 03:53:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:37.953 * Looking for test storage... 00:02:37.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.953 03:53:51 -- setup/test-setup.sh@10 -- # uname -s 00:02:37.953 03:53:51 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:37.953 03:53:51 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:37.953 03:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.953 03:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.953 03:53:51 -- common/autotest_common.sh@10 -- # set +x 00:02:37.953 ************************************ 00:02:37.953 START TEST acl 00:02:37.953 ************************************ 00:02:37.953 03:53:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:37.953 * Looking for test storage... 
00:02:37.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.953 03:53:51 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:37.953 03:53:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:37.953 03:53:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:37.953 03:53:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:37.953 03:53:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:37.953 03:53:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:37.953 03:53:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:37.953 03:53:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:37.953 03:53:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:37.953 03:53:51 -- setup/acl.sh@12 -- # devs=() 00:02:37.953 03:53:51 -- setup/acl.sh@12 -- # declare -a devs 00:02:37.953 03:53:51 -- setup/acl.sh@13 -- # drivers=() 00:02:37.953 03:53:51 -- setup/acl.sh@13 -- # declare -A drivers 00:02:37.953 03:53:51 -- setup/acl.sh@51 -- # setup reset 00:02:37.953 03:53:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.953 03:53:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.240 03:53:55 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:41.240 03:53:55 -- setup/acl.sh@16 -- # local dev driver 00:02:41.240 03:53:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:41.240 03:53:55 -- setup/acl.sh@15 -- # setup output status 00:02:41.240 03:53:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.240 03:53:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:43.788 Hugepages 00:02:43.788 node hugesize free / total 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 00:02:43.788 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
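For reference, the collect_setup_devs loop whose iterations appear here reads "setup.sh status" output line by line and keeps only PCI functions bound to the nvme driver; a condensed sketch under the same field layout (Type BDF Vendor Device NUMA Driver ...), with $SPDK_DIR and the simplified PCI_BLOCKED match as assumptions:

declare -a devs=()
declare -A drivers=()
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue           # skip hugepage/header lines
    [[ $driver == nvme ]] || continue           # keep only NVMe-bound BDFs
    [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the deny list
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <("$SPDK_DIR/scripts/setup.sh" status)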
00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:57 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:43.788 03:53:57 -- setup/acl.sh@20 -- # continue 00:02:43.788 03:53:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:43.788 03:53:58 -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:02:43.788 03:53:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:43.788 03:53:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:02:43.788 03:53:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:43.788 03:53:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:43.788 03:53:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.788 03:53:58 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:43.788 03:53:58 -- setup/acl.sh@54 -- # run_test denied denied 00:02:43.788 03:53:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.788 03:53:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.788 03:53:58 -- common/autotest_common.sh@10 -- # set +x 00:02:43.788 ************************************ 00:02:43.788 START TEST denied 00:02:43.788 ************************************ 00:02:43.788 03:53:58 -- common/autotest_common.sh@1111 -- # denied 00:02:43.788 03:53:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:02:43.788 03:53:58 -- setup/acl.sh@38 -- # setup output config 00:02:43.788 03:53:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:02:43.788 03:53:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.788 03:53:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:47.077 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:02:47.077 03:54:01 -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:02:47.078 03:54:01 -- setup/acl.sh@28 -- # local dev driver 00:02:47.078 03:54:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:47.078 03:54:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:02:47.078 03:54:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:02:47.078 03:54:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:47.078 03:54:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:47.078 03:54:01 -- setup/acl.sh@41 -- # setup reset 00:02:47.078 03:54:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:47.078 03:54:01 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.303 00:02:51.303 real 0m7.107s 00:02:51.303 user 0m2.259s 00:02:51.303 sys 0m4.115s 00:02:51.303 03:54:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.303 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:02:51.303 ************************************ 00:02:51.303 END TEST denied 00:02:51.303 ************************************ 00:02:51.303 03:54:05 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:51.303 03:54:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.303 03:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.303 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:02:51.303 ************************************ 00:02:51.303 START TEST allowed 00:02:51.303 ************************************ 00:02:51.303 03:54:05 -- common/autotest_common.sh@1111 -- # allowed 00:02:51.303 03:54:05 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:02:51.303 03:54:05 -- setup/acl.sh@45 -- # setup output config 00:02:51.303 03:54:05 -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:02:51.303 03:54:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.303 03:54:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.498 
0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.498 03:54:09 -- setup/acl.sh@47 -- # verify 00:02:55.498 03:54:09 -- setup/acl.sh@28 -- # local dev driver 00:02:55.498 03:54:09 -- setup/acl.sh@48 -- # setup reset 00:02:55.498 03:54:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.498 03:54:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.035 00:02:58.035 real 0m6.930s 00:02:58.035 user 0m2.041s 00:02:58.035 sys 0m3.910s 00:02:58.035 03:54:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.035 03:54:12 -- common/autotest_common.sh@10 -- # set +x 00:02:58.035 ************************************ 00:02:58.035 END TEST allowed 00:02:58.035 ************************************ 00:02:58.035 00:02:58.035 real 0m20.494s 00:02:58.035 user 0m6.756s 00:02:58.035 sys 0m12.216s 00:02:58.035 03:54:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:58.035 03:54:12 -- common/autotest_common.sh@10 -- # set +x 00:02:58.035 ************************************ 00:02:58.035 END TEST acl 00:02:58.035 ************************************ 00:02:58.035 03:54:12 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.035 03:54:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:58.035 03:54:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:58.035 03:54:12 -- common/autotest_common.sh@10 -- # set +x 00:02:58.294 ************************************ 00:02:58.294 START TEST hugepages 00:02:58.294 ************************************ 00:02:58.295 03:54:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:58.295 * Looking for test storage... 
00:02:58.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:58.295 03:54:12 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:58.295 03:54:12 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:58.295 03:54:12 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:58.295 03:54:12 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:58.295 03:54:12 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:58.295 03:54:12 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:58.295 03:54:12 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:58.295 03:54:12 -- setup/common.sh@18 -- # local node= 00:02:58.295 03:54:12 -- setup/common.sh@19 -- # local var val 00:02:58.295 03:54:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:58.295 03:54:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:58.295 03:54:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:58.295 03:54:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:58.295 03:54:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:58.295 03:54:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 03:54:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 72769460 kB' 'MemAvailable: 76311584 kB' 'Buffers: 3728 kB' 'Cached: 11544972 kB' 'SwapCached: 0 kB' 'Active: 8545124 kB' 'Inactive: 3492160 kB' 'Active(anon): 7975928 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491968 kB' 'Mapped: 185228 kB' 'Shmem: 7487344 kB' 'KReclaimable: 268324 kB' 'Slab: 897632 kB' 'SReclaimable: 268324 kB' 'SUnreclaim: 629308 kB' 'KernelStack: 22416 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434728 kB' 'Committed_AS: 9294832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219856 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB' 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.295 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:02:58.295 03:54:12 -- setup/common.sh@32 -- # continue
[... the xtrace repeats the same 'setup/common.sh@32 -- # [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' / 'continue' pair for every remaining /proc/meminfo field (all already listed in the printf snapshot above) until it reaches Hugepagesize; the repetitive per-field lines are omitted ...]
setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # continue 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:58.296 03:54:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:58.296 03:54:12 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:58.296 03:54:12 -- setup/common.sh@33 -- # echo 2048 00:02:58.296 03:54:12 -- setup/common.sh@33 -- # return 0 00:02:58.296 03:54:12 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:58.296 03:54:12 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:58.296 03:54:12 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:58.296 03:54:12 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:58.296 03:54:12 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:58.296 03:54:12 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
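The scan condensed above is the stock shell idiom for pulling one field out of /proc/meminfo: split each line on ': ', compare the key, echo the value on the first match. A minimal standalone sketch of the same pattern (the helper name get_field is ours, not the harness's):

    # Sketch: fetch one field from /proc/meminfo the way the trace above does.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # numeric part only; the trailing "kB" lands in $_
                return 0
            fi
        done </proc/meminfo
        return 1
    }
    get_field Hugepagesize             # prints 2048 on this machine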
00:02:58.296 03:54:12 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:58.296 03:54:12 -- setup/hugepages.sh@207 -- # get_nodes
00:02:58.296 03:54:12 -- setup/hugepages.sh@27 -- # local node
00:02:58.296 03:54:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.296 03:54:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:58.296 03:54:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:58.296 03:54:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:58.296 03:54:12 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:58.296 03:54:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:58.296 03:54:12 -- setup/hugepages.sh@208 -- # clear_hp
00:02:58.296 03:54:12 -- setup/hugepages.sh@37 -- # local node hp
00:02:58.296 03:54:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:58.296 03:54:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:58.296 03:54:12 -- setup/hugepages.sh@41 -- # echo 0
00:02:58.296 03:54:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:58.296 03:54:12 -- setup/hugepages.sh@41 -- # echo 0
00:02:58.296 03:54:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:58.296 03:54:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:58.296 03:54:12 -- setup/hugepages.sh@41 -- # echo 0
00:02:58.296 03:54:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:58.296 03:54:12 -- setup/hugepages.sh@41 -- # echo 0
00:02:58.296 03:54:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:58.296 03:54:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:58.296 03:54:12 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:58.296 03:54:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:58.296 03:54:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:58.297 03:54:12 -- common/autotest_common.sh@10 -- # set +x
00:02:58.556 ************************************
00:02:58.556 START TEST default_setup
00:02:58.556 ************************************
00:02:58.556 03:54:12 -- common/autotest_common.sh@1111 -- # default_setup
00:02:58.556 03:54:12 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:58.556 03:54:12 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:58.556 03:54:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:58.556 03:54:12 -- setup/hugepages.sh@51 -- # shift
00:02:58.556 03:54:12 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:58.556 03:54:12 -- setup/hugepages.sh@52 -- # local node_ids
00:02:58.556 03:54:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:58.556 03:54:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:58.556 03:54:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:58.556 03:54:12 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:58.556 03:54:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:58.556 03:54:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:58.556 03:54:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:58.556 03:54:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:58.556 03:54:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:58.556 03:54:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
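The nr_hugepages=1024 that get_test_nr_hugepages settles on is just the requested size divided by the hugepage size read a moment ago; nothing else feeds the number. A restatement of the arithmetic visible at setup/hugepages.sh@55-57, not the script's literal code:

    size_kb=2097152                        # first argument to get_test_nr_hugepages
    hugepagesize_kb=2048                   # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))  # -> 1024 pages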
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:58.556 03:54:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:58.556 03:54:12 -- setup/hugepages.sh@73 -- # return 0
00:02:58.556 03:54:12 -- setup/hugepages.sh@137 -- # setup output
00:02:58.556 03:54:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:58.556 03:54:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:01.095 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:01.095 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:01.354 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:02.294 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
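The rebind lines above are scripts/setup.sh detaching the ioatdma DMA channels and the NVMe drive at 0000:86:00.0 from their kernel drivers so SPDK can own them from userspace. The generic sysfs mechanics behind such a rebind look roughly like this (a sketch of the usual driver_override flow, not setup.sh itself):

    dev=0000:86:00.0                                             # example device taken from the log
    modprobe vfio-pci
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"      # detach the current kernel driver
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"  # pin the next probe to vfio-pci
    echo "$dev" > /sys/bus/pci/drivers_probe                     # re-probe; vfio-pci claims the device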
00:03:02.294 03:54:16 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:02.294 03:54:16 -- setup/hugepages.sh@89 -- # local node
00:03:02.294 03:54:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:02.294 03:54:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:02.294 03:54:16 -- setup/hugepages.sh@92 -- # local surp
00:03:02.294 03:54:16 -- setup/hugepages.sh@93 -- # local resv
00:03:02.294 03:54:16 -- setup/hugepages.sh@94 -- # local anon
00:03:02.294 03:54:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:02.294 03:54:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:02.294 03:54:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:02.294 03:54:16 -- setup/common.sh@18 -- # local node=
00:03:02.294 03:54:16 -- setup/common.sh@19 -- # local var val
00:03:02.294 03:54:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.294 03:54:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.294 03:54:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.294 03:54:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.294 03:54:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.294 03:54:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.294 03:54:16 -- setup/common.sh@31 -- # IFS=': '
00:03:02.294 03:54:16 -- setup/common.sh@31 -- # read -r var val _
00:03:02.294 03:54:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74888056 kB' 'MemAvailable: 78429908 kB' 'Buffers: 3728 kB' 'Cached: 11545076 kB' 'SwapCached: 0 kB' 'Active: 8564252 kB' 'Inactive: 3492160 kB' 'Active(anon): 7995056 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510940 kB' 'Mapped: 185240 kB' 'Shmem: 7487448 kB' 'KReclaimable: 267780 kB' 'Slab: 895600 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627820 kB' 'KernelStack: 22704 kB' 'PageTables: 9452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9317804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220064 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:02.294 [... xtrace condensed: setup/common.sh@31-32 step through every /proc/meminfo key from MemTotal onward, "continue"-ing until AnonHugePages matches ...]
00:03:02.295 03:54:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:02.295 03:54:16 -- setup/common.sh@33 -- # echo 0
00:03:02.295 03:54:16 -- setup/common.sh@33 -- # return 0
00:03:02.295 03:54:16 -- setup/hugepages.sh@97 -- # anon=0
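Note the node= / node/meminfo test in the get_meminfo preamble above: given a node number, the same parser is pointed at the per-node meminfo file instead, whose lines carry a "Node <N>" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that per-node variant (node=0 is our example; the lookups in this trace ran node-less):

    node=0                                # hypothetical node id; empty in the trace above
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                      # +([0-9]) below is an extglob pattern
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " prefix of per-node files
    printf '%s\n' "${mem[@]}"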
00:03:02.295 03:54:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:02.295 03:54:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.295 03:54:16 -- setup/common.sh@18 -- # local node=
00:03:02.295 03:54:16 -- setup/common.sh@19 -- # local var val
00:03:02.295 03:54:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.295 03:54:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.295 03:54:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.295 03:54:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.295 03:54:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.295 03:54:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.295 03:54:16 -- setup/common.sh@31 -- # IFS=': '
00:03:02.295 03:54:16 -- setup/common.sh@31 -- # read -r var val _
00:03:02.296 03:54:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74893708 kB' 'MemAvailable: 78435560 kB' 'Buffers: 3728 kB' 'Cached: 11545080 kB' 'SwapCached: 0 kB' 'Active: 8563016 kB' 'Inactive: 3492160 kB' 'Active(anon): 7993820 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509708 kB' 'Mapped: 185208 kB' 'Shmem: 7487452 kB' 'KReclaimable: 267780 kB' 'Slab: 895628 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627848 kB' 'KernelStack: 22608 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9317820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:02.296 [... xtrace condensed: setup/common.sh@31-32 step through every /proc/meminfo key again, "continue"-ing until HugePages_Surp matches ...]
00:03:02.297 03:54:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.297 03:54:16 -- setup/common.sh@33 -- # echo 0
00:03:02.297 03:54:16 -- setup/common.sh@33 -- # return 0
00:03:02.297 03:54:16 -- setup/hugepages.sh@99 -- # surp=0
00:03:02.297 03:54:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.297 03:54:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.297 03:54:16 -- setup/common.sh@18 -- # local node=
00:03:02.297 03:54:16 -- setup/common.sh@19 -- # local var val
00:03:02.297 03:54:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.297 03:54:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.297 03:54:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.297 03:54:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.297 03:54:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.297 03:54:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.297 03:54:16 -- setup/common.sh@31 -- # IFS=': '
00:03:02.297 03:54:16 -- setup/common.sh@31 -- # read -r var val _
00:03:02.297 03:54:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74891764 kB' 'MemAvailable: 78433616 kB' 'Buffers: 3728 kB' 'Cached: 11545092 kB' 'SwapCached: 0 kB' 'Active: 8563112 kB' 'Inactive: 3492160 kB' 'Active(anon): 7993916 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509788 kB' 'Mapped: 185208 kB' 'Shmem: 7487464 kB' 'KReclaimable: 267780 kB' 'Slab: 895628 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627848 kB' 'KernelStack: 22608 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9317968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:02.560 [... xtrace condensed: setup/common.sh@31-32 step through every /proc/meminfo key once more, "continue"-ing until HugePages_Rsvd matches ...]
00:03:02.561 03:54:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:02.561 03:54:16 -- setup/common.sh@33 -- # echo 0
00:03:02.561 03:54:16 -- setup/common.sh@33 -- # return 0
00:03:02.561 03:54:16 -- setup/hugepages.sh@100 -- # resv=0
00:03:02.561 03:54:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:02.561 nr_hugepages=1024
00:03:02.561 03:54:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:02.561 resv_hugepages=0
00:03:02.561 03:54:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:02.561 surplus_hugepages=0
00:03:02.561 03:54:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:02.561 anon_hugepages=0
00:03:02.561 03:54:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:02.561 03:54:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
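The two arithmetic checks above are the whole verification: the pool the kernel reports back must equal the requested page count once surplus and reserved pages are added in. Spelled out with this run's numbers (a restatement of the trace, where the literal 1024 on the left is the pool size the harness read back):

    nr_hugepages=1024    # target computed by get_test_nr_hugepages
    surp=0               # get_meminfo HugePages_Surp
    resv=0               # get_meminfo HugePages_Rsvd
    (( 1024 == nr_hugepages + surp + resv ))   # pool == target + surplus + reserved
    (( 1024 == nr_hugepages ))                 # and the target itself was not resized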
00:03:02.561 03:54:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:02.561 03:54:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:02.561 03:54:16 -- setup/common.sh@18 -- # local node=
00:03:02.561 03:54:16 -- setup/common.sh@19 -- # local var val
00:03:02.561 03:54:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.561 03:54:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.561 03:54:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.561 03:54:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.561 03:54:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.561 03:54:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.561 03:54:16 -- setup/common.sh@31 -- # IFS=': '
00:03:02.561 03:54:16 -- setup/common.sh@31 -- # read -r var val _
00:03:02.561 03:54:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74890452 kB' 'MemAvailable: 78432304 kB' 'Buffers: 3728 kB' 'Cached: 11545104 kB' 'SwapCached: 0 kB' 'Active: 8563392 kB' 'Inactive: 3492160 kB' 'Active(anon): 7994196 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510056 kB' 'Mapped: 185208 kB' 'Shmem: 7487476 kB' 'KReclaimable: 267780 kB' 'Slab: 895628 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627848 kB' 'KernelStack: 22688 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9316472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:02.562 03:54:16 -- setup/common.sh@31-32 -- # [xtrace condensed: key scan continues past every field above, MemTotal through Unaccepted, until HugePages_Total matches]
00:03:02.562 03:54:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:02.562 03:54:16 -- setup/common.sh@33 -- # echo 1024
00:03:02.562 03:54:16 -- setup/common.sh@33 -- # return 0
00:03:02.562 03:54:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
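get_meminfo, as traced here, mapfiles the whole meminfo source once and then scans it key by key; when a node argument is given (as in the HugePages_Surp 0 call that follows) it reads /sys/devices/system/node/nodeN/meminfo instead and strips the "Node N " prefix with the extglob pattern at common.sh@29. A compact sketch of the same idea, with illustrative names, assuming a NUMA box like this one:

#!/usr/bin/env bash
# Sketch of the meminfo scan traced above, system-wide or per-node
# (illustrative helper, not SPDK's get_meminfo itself).
shopt -s extglob

get_node_meminfo() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Total      # system-wide: 1024 in the log above
get_node_meminfo HugePages_Surp 0     # node 0: 0 in the trace that follows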
00:03:02.562 03:54:16 -- setup/hugepages.sh@112 -- # get_nodes
00:03:02.562 03:54:16 -- setup/hugepages.sh@27 -- # local node
00:03:02.562 03:54:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.562 03:54:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:02.562 03:54:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.562 03:54:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:02.562 03:54:16 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:02.562 03:54:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:02.562 03:54:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:02.562 03:54:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:02.562 03:54:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:02.562 03:54:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.562 03:54:16 -- setup/common.sh@18 -- # local node=0
00:03:02.562 03:54:16 -- setup/common.sh@19 -- # local var val
00:03:02.562 03:54:16 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.562 03:54:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.562 03:54:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:02.562 03:54:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:02.562 03:54:16 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.562 03:54:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.562 03:54:16 -- setup/common.sh@31 -- # IFS=': '
00:03:02.562 03:54:16 -- setup/common.sh@31 -- # read -r var val _
00:03:02.563 03:54:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41078360 kB' 'MemUsed: 6990036 kB' 'SwapCached: 0 kB' 'Active: 3705560 kB' 'Inactive: 119432 kB' 'Active(anon): 3345204 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390088 kB' 'Mapped: 184968 kB' 'AnonPages: 438192 kB' 'Shmem: 2910300 kB' 'KernelStack: 13416 kB' 'PageTables: 6196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 426292 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 293972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:02.563 03:54:16 -- setup/common.sh@31-32 -- # [xtrace condensed: key scan over the node0 fields above, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:03:02.563 03:54:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.563 03:54:16 -- setup/common.sh@33 -- # echo 0
00:03:02.563 03:54:16 -- setup/common.sh@33 -- # return 0
00:03:02.563 03:54:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:02.563 03:54:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.563 03:54:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.563 03:54:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.563 03:54:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:02.563 node0=1024 expecting 1024
00:03:02.563 03:54:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:02.563 real 0m4.071s
00:03:02.563 user 0m1.309s
00:03:02.563 sys 0m1.977s
00:03:02.563 03:54:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:02.563 03:54:16 -- common/autotest_common.sh@10 -- # set +x
00:03:02.563 ************************************
00:03:02.563 END TEST default_setup
00:03:02.563 ************************************
00:03:02.564 03:54:16 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:02.564 03:54:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:02.564 03:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:02.564 03:54:16 -- common/autotest_common.sh@10 -- # set +x
00:03:02.823 ************************************
00:03:02.823 START TEST per_node_1G_alloc
00:03:02.823 ************************************
00:03:02.823 03:54:17 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:02.823 03:54:17 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:02.823 03:54:17 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:02.823 03:54:17 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:02.823 03:54:17 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:02.823 03:54:17 -- setup/hugepages.sh@51 -- # shift
00:03:02.823 03:54:17 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:02.823 03:54:17 -- setup/hugepages.sh@52 -- # local node_ids
00:03:02.823 03:54:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:02.823 03:54:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:02.823 03:54:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:02.823 03:54:17 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:02.823 03:54:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:02.823 03:54:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:02.823 03:54:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:02.823 03:54:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:02.823 03:54:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:02.823 03:54:17 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:02.823 03:54:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:02.823 03:54:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:02.823 03:54:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:02.823 03:54:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:02.823 03:54:17 -- setup/hugepages.sh@73 -- # return 0
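get_test_nr_hugepages, as just traced, converts the requested 1048576 kB (1 GiB) into 512 two-megabyte pages and assigns that count to each of nodes 0 and 1, for a 1024-page total; the NRHUGE/HUGENODE handoff comes right after. The arithmetic as a small sketch, with illustrative variable names rather than the SPDK function's:

#!/usr/bin/env bash
# Sketch of the per-node split traced above: a per-node size in kB becomes
# a count of 2048 kB pages, assigned to every requested NUMA node.
size_kb=1048576              # 1 GiB per node, as in the call traced above
default_hugepages_kb=2048    # Hugepagesize from the meminfo snapshots
node_ids=(0 1)

nr_hugepages=$((size_kb / default_hugepages_kb))   # 1048576 / 2048 = 512

declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[$node]}"          # node0=512, node1=512
done
echo "total=$((nr_hugepages * ${#node_ids[@]}))"   # 1024, the count verified later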
00:03:02.823 03:54:17 -- setup/hugepages.sh@146 -- # NRHUGE=512
03:54:17 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
03:54:17 -- setup/hugepages.sh@146 -- # setup output
03:54:17 -- setup/common.sh@9 -- # [[ output == output ]]
03:54:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.363 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:05.363 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.363 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.363 03:54:19 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:05.363 03:54:19 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:05.363 03:54:19 -- setup/hugepages.sh@89 -- # local node
00:03:05.363 03:54:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.363 03:54:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.363 03:54:19 -- setup/hugepages.sh@92 -- # local surp
00:03:05.363 03:54:19 -- setup/hugepages.sh@93 -- # local resv
00:03:05.363 03:54:19 -- setup/hugepages.sh@94 -- # local anon
00:03:05.363 03:54:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.363 03:54:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.363 03:54:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.363 03:54:19 -- setup/common.sh@18 -- # local node=
00:03:05.363 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.363 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.363 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.363 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.363 03:54:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.363 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.363 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.363 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.363 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.364 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74885348 kB' 'MemAvailable: 78427200 kB' 'Buffers: 3728 kB' 'Cached: 11545192 kB' 'SwapCached: 0 kB' 'Active: 8563136 kB' 'Inactive: 3492160 kB' 'Active(anon): 7993940 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509088 kB' 'Mapped: 185320 kB' 'Shmem: 7487564 kB' 'KReclaimable: 267780 kB' 'Slab: 895592 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627812 kB' 'KernelStack: 22576 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9314276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220096 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:05.364 03:54:19 -- setup/common.sh@31-32 -- # [xtrace condensed: key scan over the fields above, MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:03:05.365 03:54:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.365 03:54:19 -- setup/common.sh@33 -- # echo 0
00:03:05.365 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.365 03:54:19 -- setup/hugepages.sh@97 -- # anon=0
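The hugepages.sh@96 line above is the transparent-hugepage gate: "always [madvise] never" is the content of /sys/kernel/mm/transparent_hugepage/enabled, with the active policy in brackets, and AnonHugePages is only queried when that policy is not [never]. The step before it handed NRHUGE=512 and HUGENODE=0,1 to scripts/setup.sh. A sketch of replaying that invocation and reading back the per-node counters; the path matches this workspace, and the expected values are assumptions taken from this log rather than guaranteed behavior:

#!/usr/bin/env bash
# Sketch: replay the allocation step traced above. NRHUGE and HUGENODE are
# the environment knobs the log sets before calling SPDK's setup.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The @96 gate, reproduced: the bracketed word is the active THP policy.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
[[ $thp == *"[never]"* ]] && echo "THP disabled; AnonHugePages stays 0"

sudo NRHUGE=512 HUGENODE=0,1 "$SPDK_DIR/scripts/setup.sh"

# Per-node results land in the standard sysfs counters; expect 512 each
# here (1024 total), matching the verify_nr_hugepages trace above.
for n in 0 1; do
    echo "node$n: $(cat /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages) x 2048kB"
done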
00:03:05.365 03:54:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:05.365 03:54:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.365 03:54:19 -- setup/common.sh@18 -- # local node=
00:03:05.365 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.365 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.365 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.365 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.365 03:54:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.365 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.365 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.365 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.365 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.365 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74887100 kB' 'MemAvailable: 78428952 kB' 'Buffers: 3728 kB' 'Cached: 11545196 kB' 'SwapCached: 0 kB' 'Active: 8562908 kB' 'Inactive: 3492160 kB' 'Active(anon): 7993712 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508868 kB' 'Mapped: 185312 kB' 'Shmem: 7487568 kB' 'KReclaimable: 267780 kB' 'Slab: 895564 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627784 kB' 'KernelStack: 22672 kB' 'PageTables: 9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9314288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220080 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:05.366 03:54:19 -- setup/common.sh@31-32 -- # [xtrace condensed: key scan over the fields above, MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:03:05.366 03:54:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.366 03:54:19 -- setup/common.sh@33 -- # echo 0
00:03:05.367 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.367 03:54:19 -- setup/hugepages.sh@99 -- # surp=0
00:03:05.367 03:54:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.367 03:54:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.367 03:54:19 -- setup/common.sh@18 -- # local node=
00:03:05.367 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.367 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.367 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.367 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.367 03:54:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.367 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.367 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.367 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.367 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.367 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74888828 kB' 'MemAvailable: 78430680 kB' 'Buffers: 3728 kB' 'Cached: 11545208 kB' 'SwapCached: 0 kB' 'Active: 8559872 kB' 'Inactive: 3492160 kB' 'Active(anon): 7990676 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506788 kB' 'Mapped: 184276 kB' 'Shmem: 7487580 kB' 'KReclaimable: 267780 kB' 'Slab: 895528 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627748 kB' 'KernelStack: 22608 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9299564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:05.630 03:54:19 -- setup/common.sh@31-32 -- # [xtrace condensed: key scan begins (MemTotal, MemFree, ...); the captured log ends mid-scan here]
[xtrace condensed: the same key scan runs over MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:03:05.631 03:54:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:05.631 03:54:19 -- setup/common.sh@33 -- # echo 0
00:03:05.631 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.631 03:54:19 -- setup/hugepages.sh@100 -- # resv=0
00:03:05.631 03:54:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:05.631 nr_hugepages=1024
00:03:05.631 03:54:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.631 resv_hugepages=0
00:03:05.631 03:54:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.631 surplus_hugepages=0
00:03:05.631 03:54:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.631 anon_hugepages=0
00:03:05.631 03:54:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.631 03:54:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
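The repeated scans above all come from one helper. What follows is a hedged reconstruction of setup/common.sh's get_meminfo as it appears in this xtrace; it is a sketch of the same logic, not the verbatim SPDK source, and the error handling is a guess. The helper reads /proc/meminfo, or the per-node sysfs meminfo when a node index is passed, strips the "Node N " prefix that the per-node files carry, then scans key/value pairs until the requested key matches.

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem

    # Default to the system-wide file; switch to the per-node file
    # when a node index was supplied and the sysfs path exists.
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

On this box, "get_meminfo HugePages_Rsvd" prints 0 (as in the trace above) and "get_meminfo HugePages_Free 1" prints node 1's 512.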
00:03:05.631 03:54:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.631 03:54:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.631 03:54:19 -- setup/common.sh@18 -- # local node=
00:03:05.631 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.631 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.631 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.631 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.631 03:54:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.631 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.631 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.631 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.631 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.631 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74893108 kB' 'MemAvailable: 78434960 kB' 'Buffers: 3728 kB' 'Cached: 11545224 kB' 'SwapCached: 0 kB' 'Active: 8559012 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989816 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505392 kB' 'Mapped: 184276 kB' 'Shmem: 7487596 kB' 'KReclaimable: 267780 kB' 'Slab: 895500 kB' 'SReclaimable: 267780 kB' 'SUnreclaim: 627720 kB' 'KernelStack: 22480 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9299436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220016 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: key scan over MemTotal through Unaccepted until HugePages_Total matches]
00:03:05.632 03:54:19 -- setup/common.sh@33 -- # echo 1024
00:03:05.632 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.632 03:54:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.632 03:54:19 -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.632 03:54:19 -- setup/hugepages.sh@27 -- # local node
00:03:05.632 03:54:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.632 03:54:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.632 03:54:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.632 03:54:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.632 03:54:19 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.632 03:54:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:05.632 03:54:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.632 03:54:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
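The bookkeeping between get_nodes and the per-node reads that follow is simple: start from the per-node request recorded in nodes_test, add the reserved count, then add each node's surplus as it is read back from sysfs. A minimal sketch of that accounting, assuming the two-node box from this run and the get_meminfo sketch shown earlier (the array values are the ones visible in the trace; the real setup/hugepages.sh loop may differ in detail):

declare -a nodes_test=(512 512)   # per-node request made by the test
declare -a nodes_sys=(512 512)    # per-node totals read by get_nodes from sysfs
resv=0                            # HugePages_Rsvd read earlier

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                  # fold in reserved pages
    surp=$(get_meminfo HugePages_Surp "$node")      # 0 on both nodes here
    (( nodes_test[node] += surp ))                  # fold in surplus pages
done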
00:03:05.632 03:54:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.632 03:54:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.632 03:54:19 -- setup/common.sh@18 -- # local node=0
00:03:05.632 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.632 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.632 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.632 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.632 03:54:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.632 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.632 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.632 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.632 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.632 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 42123160 kB' 'MemUsed: 5945236 kB' 'SwapCached: 0 kB' 'Active: 3700964 kB' 'Inactive: 119432 kB' 'Active(anon): 3340608 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390116 kB' 'Mapped: 184012 kB' 'AnonPages: 433380 kB' 'Shmem: 2910328 kB' 'KernelStack: 13144 kB' 'PageTables: 5260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132320 kB' 'Slab: 426408 kB' 'SReclaimable: 132320 kB' 'SUnreclaim: 294088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: key scan over the node0 fields until HugePages_Surp matches]
00:03:05.633 03:54:19 -- setup/common.sh@33 -- # echo 0
00:03:05.633 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.633 03:54:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.633 03:54:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.633 03:54:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.633 03:54:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:05.633 03:54:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.634 03:54:19 -- setup/common.sh@18 -- # local node=1
00:03:05.634 03:54:19 -- setup/common.sh@19 -- # local var val
00:03:05.634 03:54:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.634 03:54:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.634 03:54:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:05.634 03:54:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:05.634 03:54:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.634 03:54:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': '
00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _
00:03:05.634 03:54:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218160 kB' 'MemFree: 32773328 kB' 'MemUsed: 11444832 kB' 'SwapCached: 0 kB' 'Active: 4857708 kB' 'Inactive: 3372728 kB' 'Active(anon): 4648868 kB' 'Inactive(anon): 0 kB' 'Active(file): 208840 kB' 'Inactive(file): 3372728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8158836 kB' 'Mapped: 264 kB' 'AnonPages: 71676 kB' 'Shmem: 4577268 kB' 'KernelStack: 9240 kB' 'PageTables: 2992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135460 kB' 'Slab: 469028 kB' 'SReclaimable: 135460 kB' 'SUnreclaim: 333568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # continue 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.634 03:54:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.634 03:54:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.634 03:54:19 -- 
setup/common.sh@32 -- # continue
00:03:05.634 [... setup/common.sh@31-32 compare-and-continue trace repeated for each remaining /proc/meminfo key (Shmem ... HugePages_Free) ...]
00:03:05.635 03:54:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.635 03:54:19 -- setup/common.sh@33 -- # echo 0
00:03:05.635 03:54:19 -- setup/common.sh@33 -- # return 0
00:03:05.635 03:54:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.635 03:54:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:05.635 03:54:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:05.635 03:54:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:05.635 03:54:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:05.635 node0=512 expecting 512
00:03:05.635 03:54:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:05.635 03:54:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:05.635 03:54:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:05.635 03:54:19 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:05.635 node1=512 expecting 512
00:03:05.635 03:54:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:05.635 
00:03:05.635 real 0m2.905s
00:03:05.635 user 0m1.156s
00:03:05.635 sys 0m1.779s
00:03:05.635 03:54:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:05.635 03:54:19 -- common/autotest_common.sh@10 -- # set +x
00:03:05.635 ************************************
00:03:05.635 END TEST per_node_1G_alloc
00:03:05.635 ************************************
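
The HugePages_Surp lookup traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key with IFS=': ' read -r var val _. A minimal sketch of that parsing pattern, assuming a plain /proc/meminfo read (the function name and structure here are illustrative, not the script's exact code):

    #!/usr/bin/env bash
    # Return the value of a single /proc/meminfo key, mirroring the
    # IFS=': ' compare-and-continue loop visible in the trace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # A line such as "HugePages_Surp:      0" splits into the key
            # ("HugePages_Surp"), the value ("0"), and an optional "kB" unit.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the node traced above
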
00:03:05.635 03:54:20 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:05.635 03:54:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:05.635 03:54:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:05.635 03:54:20 -- common/autotest_common.sh@10 -- # set +x
00:03:05.894 ************************************
00:03:05.894 START TEST even_2G_alloc
00:03:05.894 ************************************
00:03:05.894 03:54:20 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:03:05.894 03:54:20 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:05.894 03:54:20 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:05.894 03:54:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:05.894 03:54:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:05.894 03:54:20 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:05.894 03:54:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.894 03:54:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.894 03:54:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.894 03:54:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.894 03:54:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.894 03:54:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.894 03:54:20 -- setup/hugepages.sh@83 -- # : 512
00:03:05.894 03:54:20 -- setup/hugepages.sh@84 -- # : 1
00:03:05.894 03:54:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.894 03:54:20 -- setup/hugepages.sh@83 -- # : 0
00:03:05.894 03:54:20 -- setup/hugepages.sh@84 -- # : 0
00:03:05.894 03:54:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.894 03:54:20 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:05.894 03:54:20 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:05.895 03:54:20 -- setup/hugepages.sh@153 -- # setup output
00:03:05.895 03:54:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.895 03:54:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
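
The get_test_nr_hugepages trace above shows the requested size being converted to a page count and split evenly across nodes: size=2097152, read as kB (2 GiB), together with the 2048 kB Hugepagesize later reported in /proc/meminfo yields nr_hugepages=1024, and the two NUMA nodes get 512 pages each. A sketch of that arithmetic (variable names here are illustrative):

    # Size -> hugepage-count arithmetic implied by the trace above.
    size_kb=2097152                                                           # 2 GiB in kB
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this rig
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                             # 2097152 / 2048 = 1024
    no_nodes=2
    per_node=$(( nr_hugepages / no_nodes ))                                   # 1024 / 2 = 512
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"
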
00:03:08.467 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:08.467 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:08.467 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:08.730 03:54:23 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:08.730 03:54:23 -- setup/hugepages.sh@89 -- # local node
00:03:08.730 03:54:23 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:08.730 03:54:23 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:08.730 03:54:23 -- setup/hugepages.sh@92 -- # local surp
00:03:08.730 03:54:23 -- setup/hugepages.sh@93 -- # local resv
00:03:08.731 03:54:23 -- setup/hugepages.sh@94 -- # local anon
00:03:08.731 03:54:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:08.731 03:54:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:08.731 03:54:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:08.731 03:54:23 -- setup/common.sh@18 -- # local node=
00:03:08.731 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.731 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.731 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.731 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.731 03:54:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.731 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.731 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.731 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.731 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.731 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74909920 kB' 'MemAvailable: 78451756 kB' 'Buffers: 3728 kB' 'Cached: 11545320 kB' 'SwapCached: 0 kB' 'Active: 8557544 kB' 'Inactive: 3492160 kB' 'Active(anon): 7988348 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503932 kB' 'Mapped: 184248 kB' 'Shmem: 7487692 kB' 'KReclaimable: 267748 kB' 'Slab: 894396 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626648 kB' 'KernelStack: 22416 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9296908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219968 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:08.731 03:54:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.731 03:54:23 -- setup/common.sh@32 -- # continue
00:03:08.731 [... setup/common.sh@31-32 compare-and-continue trace repeated for each /proc/meminfo key before AnonHugePages ...]
00:03:08.732 03:54:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.732 03:54:23 -- setup/common.sh@33 -- # echo 0
00:03:08.732 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.732 03:54:23 -- setup/hugepages.sh@97 -- # anon=0
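
The @96 line above also shows the guard on transparent hugepages: the kernel's THP mode string on this node reads "always [madvise] never", and the pattern match only lets the AnonHugePages sampling proceed when "[never]" is not the selected mode. Equivalent one-off queries for the same readings (illustrative alternatives, not the script's own implementation):

    # THP mode gate seen at setup/hugepages.sh@96: proceed unless "[never]" is selected.
    [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] &&
        awk '$1 == "AnonHugePages:" {print $2, $3}' /proc/meminfo   # "0 kB" in this run

    # The four hugepage counters verify_nr_hugepages goes on to read:
    grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
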
00:03:08.732 03:54:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:08.732 03:54:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.732 03:54:23 -- setup/common.sh@18 -- # local node=
00:03:08.732 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.732 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.732 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.732 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.732 03:54:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.732 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.732 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.732 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.732 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.732 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74911936 kB' 'MemAvailable: 78453772 kB' 'Buffers: 3728 kB' 'Cached: 11545320 kB' 'SwapCached: 0 kB' 'Active: 8557872 kB' 'Inactive: 3492160 kB' 'Active(anon): 7988676 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504276 kB' 'Mapped: 184244 kB' 'Shmem: 7487692 kB' 'KReclaimable: 267748 kB' 'Slab: 894364 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626616 kB' 'KernelStack: 22432 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9296920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219936 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:08.732 [... setup/common.sh@31-32 compare-and-continue trace repeated for each /proc/meminfo key before HugePages_Surp ...]
00:03:08.733 03:54:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.733 03:54:23 -- setup/common.sh@33 -- # echo 0
00:03:08.733 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.733 03:54:23 -- setup/hugepages.sh@99 -- # surp=0
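
get_meminfo also supports per-node queries; the @23 and @29 lines above show the mechanics even though node= is empty in this run, so /proc/meminfo is used. When a node is given, the source switches to /sys/devices/system/node/node$node/meminfo, whose lines carry a "Node N " prefix that the extglob expansion strips so the same key/value parser works on both files. A standalone sketch of that path selection (node=0 is chosen here purely for illustration):

    # Per-node meminfo selection and "Node N " prefix stripping, as implied
    # by the setup/common.sh@23 and @29 trace lines above.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    printf '%s\n' "${mem[@]}" | awk '$1 == "HugePages_Total:" {print $2}'
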
00:03:08.733 03:54:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:08.733 03:54:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:08.733 03:54:23 -- setup/common.sh@18 -- # local node=
00:03:08.733 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.733 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.733 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.733 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.733 03:54:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.733 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.733 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.733 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.733 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.733 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74912760 kB' 'MemAvailable: 78454596 kB' 'Buffers: 3728 kB' 'Cached: 11545332 kB' 'SwapCached: 0 kB' 'Active: 8557140 kB' 'Inactive: 3492160 kB' 'Active(anon): 7987944 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503560 kB' 'Mapped: 184304 kB' 'Shmem: 7487704 kB' 'KReclaimable: 267748 kB' 'Slab: 894396 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626648 kB' 'KernelStack: 22400 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9296932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219936 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:08.734 [... setup/common.sh@31-32 compare-and-continue trace repeated for each /proc/meminfo key before HugePages_Rsvd ...]
00:03:08.735 03:54:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:08.735 03:54:23 -- setup/common.sh@33 -- # echo 0
00:03:08.735 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.735 03:54:23 -- setup/hugepages.sh@100 -- # resv=0
00:03:08.735 03:54:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:08.735 nr_hugepages=1024
00:03:08.735 03:54:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:08.735 resv_hugepages=0
00:03:08.735 03:54:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:08.735 surplus_hugepages=0
00:03:08.735 03:54:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:08.735 anon_hugepages=0
00:03:08.735 03:54:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.735 03:54:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
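
The @107 and @109 checks above close the loop on the earlier request: the pages the test asked for must equal what the kernel now reports once surplus and reserved pages are accounted for. With the values just traced, as a sketch:

    # The verify_nr_hugepages accounting checks, with this run's values.
    requested=1024 nr_hugepages=1024 surp=0 resv=0
    (( requested == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( requested == nr_hugepages ))               || echo "unexpected surplus/reserved pages"
    # 1024 == 1024 + 0 + 0 holds, so the test proceeds to re-read
    # HugePages_Total and HugePages_Free from /proc/meminfo.
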
00:03:08.735 03:54:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:08.735 03:54:23 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:08.735 03:54:23 -- setup/common.sh@18 -- # local node=
00:03:08.735 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.735 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.735 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.735 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.735 03:54:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.735 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.735 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.735 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.735 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.735 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74912760 kB' 'MemAvailable: 78454596 kB' 'Buffers: 3728 kB' 'Cached: 11545348 kB' 'SwapCached: 0 kB' 'Active: 8557056 kB' 'Inactive: 3492160 kB' 'Active(anon): 7987860 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503420 kB' 'Mapped: 184304 kB' 'Shmem: 7487720 kB' 'KReclaimable: 267748 kB' 'Slab: 894396 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626648 kB' 'KernelStack: 22384 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9296948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219936 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:08.736 [... setup/common.sh@31-32 compare-and-continue trace repeated for the /proc/meminfo keys ahead of HugePages_Total ...]
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 
03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.736 03:54:23 -- setup/common.sh@32 -- # continue 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.736 03:54:23 -- 
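To make the condensed scans readable: all of this trace is one small helper, get_meminfo, looping over "field: value" lines. Below is a minimal sketch reconstructed from the setup/common.sh line numbers visible in the xtrace (@16-@33); it is not the verbatim SPDK source and details may differ:

    #!/usr/bin/env bash
    # Reconstructed sketch of setup/common.sh:get_meminfo -- not verbatim SPDK code.
    shopt -s extglob                          # needed for the +([0-9]) pattern below
    get_meminfo() {                           # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # @23-@24: with a node argument, read that NUMA node's meminfo instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"              # @28
        mem=("${mem[@]#Node +([0-9]) }")      # @29: strip the per-node "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"   # @31
            [[ $var == "$get" ]] || continue        # @32: the repeated 'continue' above
            echo "$val"                             # @33: e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Total for the system-wide view, or as get_meminfo HugePages_Surp 0 for node 0, which is exactly what hugepages.sh@117 does next in the trace.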
00:03:08.736 03:54:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:08.736 03:54:23 -- setup/common.sh@33 -- # echo 1024
00:03:08.736 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.736 03:54:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:08.736 03:54:23 -- setup/hugepages.sh@112 -- # get_nodes
00:03:08.736 03:54:23 -- setup/hugepages.sh@27 -- # local node
00:03:08.736 03:54:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.736 03:54:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:08.736 03:54:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:08.736 03:54:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:08.736 03:54:23 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:08.736 03:54:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:08.736 03:54:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:08.736 03:54:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:08.736 03:54:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:08.736 03:54:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.736 03:54:23 -- setup/common.sh@18 -- # local node=0
00:03:08.736 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.736 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.736 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.736 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:08.736 03:54:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:08.736 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.736 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.736 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.736 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.736 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 42141040 kB' 'MemUsed: 5927356 kB' 'SwapCached: 0 kB' 'Active: 3702136 kB' 'Inactive: 119432 kB' 'Active(anon): 3341780 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390180 kB' 'Mapped: 184008 kB' 'AnonPages: 434604 kB' 'Shmem: 2910392 kB' 'KernelStack: 13192 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132288 kB' 'Slab: 425964 kB' 'SReclaimable: 132288 kB' 'SUnreclaim: 293676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:08.736 03:54:23 [... xtrace condensed: get_meminfo scans node0 meminfo field by field until HugePages_Surp matches ...]
00:03:08.737 03:54:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.737 03:54:23 -- setup/common.sh@33 -- # echo 0
00:03:08.737 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.737 03:54:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:08.737 03:54:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:08.737 03:54:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:08.737 03:54:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
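The hugepages.sh logic around @110-@128 reduces to one identity check plus per-node bookkeeping. A hedged sketch, reusing the get_meminfo sketch above; the 512-per-node expectation comes from the test setup itself, the rest is reconstruction rather than verbatim SPDK source:

    # Sketch of the even_2G_alloc verification (reconstructed, not verbatim).
    nr_hugepages=1024 surp=0 resv=0
    nodes_test=(512 512)                      # expected even split across 2 NUMA nodes
    # @110: the global pool must equal requested + surplus + reserved pages
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    # @115-@117: each node's expectation absorbs its reserved and surplus pages
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting ${nodes_test[node]}"
    done

With surp and resv both 0 in this run, the loop prints the "node0=512 expecting 512" and "node1=512 expecting 512" lines seen just below.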
00:03:08.737 03:54:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.737 03:54:23 -- setup/common.sh@18 -- # local node=1
00:03:08.737 03:54:23 -- setup/common.sh@19 -- # local var val
00:03:08.737 03:54:23 -- setup/common.sh@20 -- # local mem_f mem
00:03:08.737 03:54:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.737 03:54:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:08.737 03:54:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:08.737 03:54:23 -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.737 03:54:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.737 03:54:23 -- setup/common.sh@31 -- # IFS=': '
00:03:08.737 03:54:23 -- setup/common.sh@31 -- # read -r var val _
00:03:08.737 03:54:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218160 kB' 'MemFree: 32772496 kB' 'MemUsed: 11445664 kB' 'SwapCached: 0 kB' 'Active: 4854680 kB' 'Inactive: 3372728 kB' 'Active(anon): 4645840 kB' 'Inactive(anon): 0 kB' 'Active(file): 208840 kB' 'Inactive(file): 3372728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8158920 kB' 'Mapped: 296 kB' 'AnonPages: 68552 kB' 'Shmem: 4577352 kB' 'KernelStack: 9192 kB' 'PageTables: 3088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135460 kB' 'Slab: 468432 kB' 'SReclaimable: 135460 kB' 'SUnreclaim: 332972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:08.737 03:54:23 [... xtrace condensed: get_meminfo scans node1 meminfo field by field until HugePages_Surp matches ...]
00:03:08.738 03:54:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.738 03:54:23 -- setup/common.sh@33 -- # echo 0
00:03:08.738 03:54:23 -- setup/common.sh@33 -- # return 0
00:03:08.738 03:54:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:08.738 03:54:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:08.738 03:54:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:08.738 03:54:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:08.738 03:54:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:08.738 node0=512 expecting 512
00:03:08.738 03:54:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:08.738 03:54:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:08.738 03:54:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:08.738 03:54:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:08.738 node1=512 expecting 512
00:03:08.738 03:54:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:08.738 
00:03:08.738 real	0m3.081s
00:03:08.738 user	0m1.239s
00:03:08.738 sys	0m1.884s
00:03:08.738 03:54:23 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:08.738 03:54:23 -- common/autotest_common.sh@10 -- # set +x
00:03:08.738 ************************************
00:03:08.738 END TEST even_2G_alloc
00:03:08.738 ************************************
00:03:08.997 03:54:23 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:08.997 03:54:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:08.997 03:54:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:08.997 03:54:23 -- common/autotest_common.sh@10 -- # set +x
00:03:08.997 ************************************
00:03:08.997 START TEST odd_alloc
00:03:08.997 ************************************
00:03:08.997 03:54:23 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:08.997 03:54:23 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:08.997 03:54:23 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:08.997 03:54:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:08.997 03:54:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:08.997 03:54:23 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:08.997 03:54:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:08.997 03:54:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:08.997 03:54:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:08.997 03:54:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:08.997 03:54:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:08.997 03:54:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:08.997 03:54:23 -- setup/hugepages.sh@83 -- # : 513
00:03:08.997 03:54:23 -- setup/hugepages.sh@84 -- # : 1
00:03:08.997 03:54:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:08.997 03:54:23 -- setup/hugepages.sh@83 -- # : 0
00:03:08.997 03:54:23 -- setup/hugepages.sh@84 -- # : 0
00:03:08.997 03:54:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.997 03:54:23 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:08.997 03:54:23 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:08.997 03:54:23 -- setup/hugepages.sh@160 -- # setup output
00:03:08.997 03:54:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.997 03:54:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:11.656 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:11.656 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.656 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.918 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.918 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.918 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.918 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.918 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.918 03:54:26 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:11.918 03:54:26 -- setup/hugepages.sh@89 -- # local node
00:03:11.918 03:54:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.918 03:54:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.918 03:54:26 -- setup/hugepages.sh@92 -- # local surp
00:03:11.918 03:54:26 -- setup/hugepages.sh@93 -- # local resv
00:03:11.918 03:54:26 -- setup/hugepages.sh@94 -- # local anon
00:03:11.918 03:54:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.918 03:54:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.918 03:54:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.918 03:54:26 -- setup/common.sh@18 -- # local node=
00:03:11.918 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:11.918 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.918 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.918 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.918 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.918 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.919 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
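The odd_alloc prologue above (hugepages.sh@159 onward) sizes the pool from HUGEMEM=2049 and splits it across both NUMA nodes so that one node carries the odd extra page. A sketch of that arithmetic follows; the round-up step is an assumption, since the trace only shows the results (nr_hugepages=1025, node assignments 512 and 513):

    # Sketch of get_test_nr_hugepages + get_test_nr_hugepages_per_node for
    # HUGEMEM=2049 (reconstructed; rounding behavior assumed).
    HUGEMEM=2049                                        # MiB, deliberately odd
    size=$((HUGEMEM * 1024))                            # 2098176 kB, as traced at @49
    default_hugepages=2048                              # kB, the 2M Hugepagesize
    nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))   # -> 1025
    no_nodes=2
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$((nr_hugepages / no_nodes))          # node1 = 512
    nodes_test[0]=$((nr_hugepages - nodes_test[no_nodes - 1]))     # node0 = 513
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"

Giving the remainder to the first node is what makes the 1025-page pool land as 513/512, the split verify_nr_hugepages is about to check.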
00:03:11.919 03:54:26 -- setup/common.sh@31 -- # IFS=': '
00:03:11.919 03:54:26 -- setup/common.sh@31 -- # read -r var val _
00:03:11.919 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74906716 kB' 'MemAvailable: 78448552 kB' 'Buffers: 3728 kB' 'Cached: 11545436 kB' 'SwapCached: 0 kB' 'Active: 8559156 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989960 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505608 kB' 'Mapped: 184356 kB' 'Shmem: 7487808 kB' 'KReclaimable: 267748 kB' 'Slab: 894588 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626840 kB' 'KernelStack: 22400 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 9297632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219856 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:11.919 03:54:26 [... xtrace condensed: get_meminfo scans /proc/meminfo field by field until AnonHugePages matches ...]
00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.920 03:54:26 -- setup/common.sh@33 -- # echo 0
00:03:11.920 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:11.920 03:54:26 -- setup/hugepages.sh@97 -- # anon=0
00:03:11.920 03:54:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:11.920 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.920 03:54:26 -- setup/common.sh@18 -- # local node=
00:03:11.920 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:11.920 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.920 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.920 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.920 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.920 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.920 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': '
00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _
00:03:11.920 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74912976 kB' 'MemAvailable: 78454812 kB' 'Buffers: 3728 kB' 'Cached: 11545444 kB' 'SwapCached: 0 kB' 'Active: 8558444 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989248 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504788 kB' 'Mapped: 184336 kB' 'Shmem: 7487816 kB' 'KReclaimable: 267748 kB' 'Slab: 894564 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626816 kB' 'KernelStack: 22352 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 9297648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219840 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
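Two details of verify_nr_hugepages are visible in the trace above: the @96 test against the transparent_hugepage state string, and the AnonHugePages read that follows. Roughly, as a reconstructed sketch:

    # Sketch of the anon-page guard at hugepages.sh@96-@97 (reconstructed).
    # "always [madvise] never" is the kernel's THP state string; the bracketed
    # word is the active mode. Only when THP is not pinned to [never] can
    # anonymous huge pages exist, so only then are they worth reading out.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)     # 0 kB in the run above
    fi

The meminfo dump just above also confirms the odd allocation took effect: 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB' (1025 pages of 2048 kB).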
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.920 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.920 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 
00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.921 03:54:26 -- setup/common.sh@32 -- # continue 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:11.921 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:11.921 03:54:26 -- 
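The scan elided above is the whole trick behind the traced get_meminfo: split each meminfo line on ': ', compare the key against the requested field, and echo the value on the first hit. A minimal self-contained sketch of that mechanism, reconstructed from the xtrace above (system-wide /proc/meminfo only; the traced helper additionally handles per-node files and 'Node N' prefixes, which show up later in this log):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo logic reconstructed from the xtrace above.
    # Assumption: plain /proc/meminfo, no node argument.
    get_meminfo() {
        local get=$1 line var val _
        local mem_f=/proc/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"         # one array element per meminfo line
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line" # split "Key:   value kB" on ':' and ' '
            if [[ $var == "$get" ]]; then
                echo "$val"               # value only, unit dropped into _
                return 0
            fi
        done
        return 1                          # key not present in this file
    }
    get_meminfo HugePages_Surp

On this box the call prints 0, matching the HugePages_Surp value in the snapshot above.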
00:03:11.921 [xtrace elided: the remaining keys HugePages_Total, HugePages_Free and HugePages_Rsvd also continue before the target key matches]
00:03:11.921 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.921 03:54:26 -- setup/common.sh@33 -- # echo 0
00:03:11.921 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:11.921 03:54:26 -- setup/hugepages.sh@99 -- # surp=0
00:03:11.921 03:54:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:11.921 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:11.921 03:54:26 -- setup/common.sh@18 -- # local node=
00:03:11.921 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:11.921 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.921 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.921 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.921 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.921 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.921 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.921 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74912188 kB' 'MemAvailable: 78454024 kB' 'Buffers: 3728 kB' 'Cached: 11545452 kB' 'SwapCached: 0 kB' 'Active: 8558608 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989412 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504888 kB' 'Mapped: 184336 kB' 'Shmem: 7487824 kB' 'KReclaimable: 267748 kB' 'Slab: 894592 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626844 kB' 'KernelStack: 22352 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 9297664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219840 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:11.922 [xtrace elided: per-key scan of the snapshot against HugePages_Rsvd; every key from MemTotal through HugePages_Free continues without a match]
00:03:11.923 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:11.923 03:54:26 -- setup/common.sh@33 -- # echo 0
00:03:11.923 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:11.923 03:54:26 -- setup/hugepages.sh@100 -- # resv=0
00:03:11.923 03:54:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:11.923 nr_hugepages=1025
00:03:11.923 03:54:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:11.923 resv_hugepages=0
00:03:11.923 03:54:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:11.923 surplus_hugepages=0
00:03:11.923 03:54:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:11.923 anon_hugepages=0
00:03:11.923 03:54:26 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:11.923 03:54:26 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:11.923 03:54:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.923 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.923 03:54:26 -- setup/common.sh@18 -- # local node=
00:03:11.923 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:11.923 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:11.923 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.923 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.923 03:54:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.923 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.923 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.923 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74912440 kB' 'MemAvailable: 78454276 kB' 'Buffers: 3728 kB' 'Cached: 11545480 kB' 'SwapCached: 0 kB' 'Active: 8558752 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989556 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505032 kB' 'Mapped: 184336 kB' 'Shmem: 7487852 kB' 'KReclaimable: 267748 kB' 'Slab: 894592 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 626844 kB' 'KernelStack: 22384 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482280 kB' 'Committed_AS: 9298048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219856 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:12.185 [xtrace elided: per-key scan of the snapshot against HugePages_Total; every key from MemTotal through AnonHugePages continues without a match]
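The two arithmetic guards at hugepages.sh@107-109 are the actual assertion of this step: the kernel's hugepage pool must add up to the requested 1025 pages once surplus and reserved pages are counted. With the values just gathered (anon=0, surp=0, resv=0) the check reduces to 1025 == 1025 + 0 + 0. A sketch of the same bookkeeping, with variable names mirroring the traced script (the literal values are the ones reported in this run):

    # Sketch of the accounting check traced at setup/hugepages.sh@107-110.
    nr_hugepages=1025   # requested pool size
    anon=0              # AnonHugePages, from get_meminfo
    surp=0              # HugePages_Surp, from get_meminfo
    resv=0              # HugePages_Rsvd, from get_meminfo
    total=1025          # HugePages_Total, from get_meminfo

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))" >&2
        exit 1
    fi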
00:03:12.185 [xtrace elided: keys ShmemHugePages through Unaccepted continue before the target key matches]
00:03:12.186 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.186 03:54:26 -- setup/common.sh@33 -- # echo 1025
00:03:12.186 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:12.186 03:54:26 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:12.186 03:54:26 -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.186 03:54:26 -- setup/hugepages.sh@27 -- # local node
00:03:12.186 03:54:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.186 03:54:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:12.186 03:54:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.186 03:54:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:12.186 03:54:26 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.186 03:54:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.186 03:54:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.186 03:54:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.186 03:54:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.186 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.186 03:54:26 -- setup/common.sh@18 -- # local node=0
00:03:12.186 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:12.186 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.186 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.186 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.186 03:54:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.186 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.186 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.186 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 42136216 kB' 'MemUsed: 5932180 kB' 'SwapCached: 0 kB' 'Active: 3704220 kB' 'Inactive: 119432 kB' 'Active(anon): 3343864 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390296 kB' 'Mapped: 184016 kB' 'AnonPages: 436608 kB' 'Shmem: 2910508 kB' 'KernelStack: 13224 kB' 'PageTables: 5368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132288 kB' 'Slab: 426072 kB' 'SReclaimable: 132288 kB' 'SUnreclaim: 293784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
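The node-0 call above exercises get_meminfo's second mode: given a node argument, common.sh@23-24 swaps /proc/meminfo for the per-node sysfs file, and common.sh@29 strips the 'Node N ' prefix those lines carry so the same key scan applies to both formats. A small sketch of that path selection, reconstructed from the trace (extglob is assumed for the +([0-9]) pattern):

    shopt -s extglob   # enables the +([0-9]) pattern used below
    # Sketch of the traced per-node file selection (setup/common.sh@22-29).
    node=0             # e.g. second argument of: get_meminfo HugePages_Surp 0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # Per-node file exists: read NUMA-local counters instead of system-wide ones.
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: 48068396 kB"; drop the prefix
    # so a single "Key: value" scan works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # show the first few normalized lines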
00:03:12.186 [xtrace elided: per-key scan of the node0 snapshot against HugePages_Surp; every key from MemTotal through HugePages_Free continues without a match]
00:03:12.187 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.187 03:54:26 -- setup/common.sh@33 -- # echo 0
00:03:12.187 03:54:26 -- setup/common.sh@33 -- # return 0
00:03:12.187 03:54:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.187 03:54:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.187 03:54:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.187 03:54:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:12.187 03:54:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.187 03:54:26 -- setup/common.sh@18 -- # local node=1
00:03:12.187 03:54:26 -- setup/common.sh@19 -- # local var val
00:03:12.187 03:54:26 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.187 03:54:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.187 03:54:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:12.187 03:54:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:12.187 03:54:26 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.187 03:54:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.187 03:54:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218160 kB' 'MemFree: 32776612 kB' 'MemUsed: 11441548 kB' 'SwapCached: 0 kB' 'Active: 4854596 kB' 'Inactive: 3372728 kB' 'Active(anon): 4645756 kB' 'Inactive(anon): 0 kB' 'Active(file): 208840 kB' 'Inactive(file): 3372728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8158928 kB' 'Mapped: 320 kB' 'AnonPages: 68516 kB' 'Shmem: 4577360 kB' 'KernelStack: 9176 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135460 kB' 'Slab: 468520 kB' 'SReclaimable: 135460 kB' 'SUnreclaim: 333060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:12.187 [xtrace elided: per-key scan of the node1 snapshot against HugePages_Surp; the captured log breaks off mid-scan at SUnreclaim]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # continue 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.188 03:54:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.188 03:54:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.188 03:54:26 -- setup/common.sh@33 -- # echo 0 00:03:12.188 03:54:26 -- setup/common.sh@33 -- # return 0 00:03:12.188 03:54:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.188 03:54:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.188 03:54:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.188 03:54:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.188 03:54:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:12.188 node0=512 expecting 513 00:03:12.188 03:54:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.188 03:54:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.188 03:54:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.188 03:54:26 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:12.188 node1=513 expecting 512 00:03:12.188 03:54:26 -- 
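The long runs of IFS=': ' / read -r var val _ / continue entries above are SPDK's get_meminfo helper from test/setup/common.sh scanning a meminfo file one key at a time; when a node number is passed and the per-NUMA-node file exists, it reads that instead of /proc/meminfo. A minimal sketch of the pattern, reconstructed from the xtrace (the helper's exact source may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # Sketch of the get_meminfo pattern visible in the trace above.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem

        # Prefer the per-NUMA-node view when a node number is given;
        # with $node empty the path is node/node/meminfo and never exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " column prefix

        # This loop is the long run of [[ ... ]] / continue entries in the
        # log: every non-matching field just continues to the next read.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp 1   # prints 0 on the node1 snapshot above

Each query therefore re-scans the whole file top to bottom, which is why every get_meminfo call in this log produces another full key-by-key run.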
00:03:12.188 03:54:26 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:12.188 
00:03:12.188 real	0m3.083s
00:03:12.188 user	0m1.265s
00:03:12.188 sys	0m1.863s
00:03:12.188 03:54:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:12.188 03:54:26 -- common/autotest_common.sh@10 -- # set +x
00:03:12.188 ************************************
00:03:12.188 END TEST odd_alloc
00:03:12.188 ************************************
00:03:12.188 03:54:26 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:12.188 03:54:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:12.188 03:54:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:12.188 03:54:26 -- common/autotest_common.sh@10 -- # set +x
00:03:12.188 ************************************
00:03:12.188 START TEST custom_alloc
00:03:12.188 ************************************
00:03:12.188 03:54:26 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:12.188 03:54:26 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:12.188 03:54:26 -- setup/hugepages.sh@169 -- # local node
00:03:12.188 03:54:26 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:12.188 03:54:26 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:12.188 03:54:26 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:12.188 03:54:26 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:12.188 03:54:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:12.188 03:54:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.188 03:54:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:12.188 03:54:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.188 03:54:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:12.188 03:54:26 -- setup/hugepages.sh@83 -- # : 256
00:03:12.188 03:54:26 -- setup/hugepages.sh@84 -- # : 1
00:03:12.188 03:54:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:12.188 03:54:26 -- setup/hugepages.sh@83 -- # : 0
00:03:12.188 03:54:26 -- setup/hugepages.sh@84 -- # : 0
00:03:12.188 03:54:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:12.188 03:54:26 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:12.188 03:54:26 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:12.188 03:54:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:12.188 03:54:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.188 03:54:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:12.188 03:54:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.188 03:54:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:12.188 03:54:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:12.188 03:54:26 -- setup/hugepages.sh@78 -- # return 0
00:03:12.188 03:54:26 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:12.188 03:54:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:12.188 03:54:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:12.188 03:54:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:12.188 03:54:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:12.188 03:54:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:12.188 03:54:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:12.188 03:54:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:12.188 03:54:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:12.188 03:54:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:12.188 03:54:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:12.188 03:54:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:12.188 03:54:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:12.188 03:54:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:12.188 03:54:26 -- setup/hugepages.sh@78 -- # return 0
00:03:12.188 03:54:26 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:12.188 03:54:26 -- setup/hugepages.sh@187 -- # setup output
00:03:12.188 03:54:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.188 03:54:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.485 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.485 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.485 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.485 03:54:29 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:15.485 03:54:29 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:15.485 03:54:29 -- setup/hugepages.sh@89 -- # local node
00:03:15.485 03:54:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.485 03:54:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.485 03:54:29 -- setup/hugepages.sh@92 -- # local surp
00:03:15.485 03:54:29 -- setup/hugepages.sh@93 -- # local resv
00:03:15.485 03:54:29 -- setup/hugepages.sh@94 -- # local anon
00:03:15.485 03:54:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.485 03:54:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.485 03:54:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.485 03:54:29 -- setup/common.sh@18 -- # local node=
00:03:15.485 03:54:29 -- setup/common.sh@19 -- # local var val
00:03:15.485 03:54:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.485 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.485 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.485 03:54:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.485 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.485 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.485 03:54:29 -- setup/common.sh@31 -- # IFS=': '
00:03:15.485 03:54:29 -- setup/common.sh@31 -- # read -r var val _
00:03:15.485 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 73887480 kB' 'MemAvailable: 77429316 kB' 'Buffers: 3728 kB' 'Cached: 11545572 kB' 'SwapCached: 0 kB' 'Active: 8560344 kB' 'Inactive: 3492160 kB' 'Active(anon): 7991148 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505980 kB' 'Mapped: 184440 kB' 'Shmem: 7487944 kB' 'KReclaimable: 267748 kB' 'Slab: 894924 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627176 kB' 'KernelStack: 22432 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 9298512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219984 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
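The custom_alloc sizing traced above turns two byte-count-style requests into per-node page counts: get_test_nr_hugepages 1048576 yields nodes_hp[0]=512, get_test_nr_hugepages 2097152 yields nodes_hp[1]=1024, and the loop at hugepages.sh@181-183 joins them into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' for setup.sh, 1536 pages in total. A sketch of that arithmetic, assuming the sizes are in kB against this run's 2048 kB Hugepagesize (both assumptions, chosen because they reproduce the traced numbers; nr_pages is an illustrative name, not SPDK's):

    #!/usr/bin/env bash
    default_hugepages=2048   # kB; Hugepagesize from /proc/meminfo in this run

    nr_pages() { echo $(( $1 / default_hugepages )); }   # kB -> page count

    declare -a nodes_hp hugenode
    nodes_hp[0]=$(nr_pages 1048576)   # 1 GiB on node0 -> 512 pages
    nodes_hp[1]=$(nr_pages 2097152)   # 2 GiB on node1 -> 1024 pages

    total=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( total += nodes_hp[node] ))
    done

    # Join with commas, mirroring custom_alloc's "local IFS=," trick.
    HUGENODE=$(IFS=,; echo "${hugenode[*]}")
    echo "HUGENODE=$HUGENODE nr_hugepages=$total"
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536

The nr_hugepages=1536 entry right after setup.sh returns is this total being recorded before verification starts.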
[... xtrace elided: setup/common.sh@31-32 scans every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages, continuing on each non-match ...]
00:03:15.487 03:54:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.487 03:54:29 -- setup/common.sh@33 -- # echo 0
00:03:15.487 03:54:29 -- setup/common.sh@33 -- # return 0
00:03:15.487 03:54:29 -- setup/hugepages.sh@97 -- # anon=0
00:03:15.487 03:54:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.487 03:54:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.487 03:54:29 -- setup/common.sh@18 -- # local node=
00:03:15.487 03:54:29 -- setup/common.sh@19 -- # local var val
00:03:15.487 03:54:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.487 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.487 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.487 03:54:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.487 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.487 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.487 03:54:29 -- setup/common.sh@31 -- # IFS=': '
00:03:15.487 03:54:29 -- setup/common.sh@31 -- # read -r var val _
00:03:15.487 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 73889464 kB' 'MemAvailable: 77431300 kB' 'Buffers: 3728 kB' 'Cached: 11545572 kB' 'SwapCached: 0 kB' 'Active: 8560512 kB' 'Inactive: 3492160 kB' 'Active(anon): 7991316 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505632 kB' 'Mapped: 184440 kB' 'Shmem: 7487944 kB' 'KReclaimable: 267748 kB' 'Slab: 894888 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627140 kB' 'KernelStack: 22400 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 9298524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219936 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[... xtrace elided: the same per-key scan repeats, MemTotal through HugePages_Rsvd each compared against HugePages_Surp and continued ...]
00:03:15.488 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.488 03:54:29 -- setup/common.sh@33 -- # echo 0
00:03:15.488 03:54:29 -- setup/common.sh@33 -- # return 0
00:03:15.488 03:54:29 -- setup/hugepages.sh@99 -- # surp=0
00:03:15.488 03:54:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:15.488 03:54:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:15.488 03:54:29 -- setup/common.sh@18 -- # local node=
00:03:15.488 03:54:29 -- setup/common.sh@19 -- # local var val
00:03:15.488 03:54:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.488 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.488 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.488 03:54:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.488 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.488 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.488 03:54:29 -- setup/common.sh@31 -- # IFS=': '
00:03:15.488 03:54:29 -- setup/common.sh@31 -- # read -r var val _
00:03:15.488 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 73890528 kB' 'MemAvailable: 77432364 kB' 'Buffers: 3728 kB' 'Cached: 11545584 kB' 'SwapCached: 0 kB' 'Active: 8559416 kB' 'Inactive: 3492160 kB' 'Active(anon): 7990220 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505488 kB' 'Mapped: 184360 kB' 'Shmem: 7487956 kB' 'KReclaimable: 267748 kB' 'Slab: 894860 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627112 kB' 'KernelStack: 22400 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959016 kB' 'Committed_AS: 9298540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219920 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[... xtrace elided: the per-key scan runs again, this time against HugePages_Rsvd, continuing on each non-match ...]
IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # continue 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # continue 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # continue 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # continue 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # continue 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.490 03:54:29 -- setup/common.sh@33 -- # echo 0 00:03:15.490 03:54:29 -- setup/common.sh@33 -- # return 0 00:03:15.490 03:54:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.490 03:54:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:15.490 nr_hugepages=1536 00:03:15.490 03:54:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.490 resv_hugepages=0 00:03:15.490 03:54:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.490 surplus_hugepages=0 00:03:15.490 03:54:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.490 anon_hugepages=0 00:03:15.490 03:54:29 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:15.490 03:54:29 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:15.490 03:54:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.490 03:54:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.490 03:54:29 -- setup/common.sh@18 -- # local node= 00:03:15.490 03:54:29 -- setup/common.sh@19 -- # local var val 00:03:15.490 03:54:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.490 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.490 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.490 03:54:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.490 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.490 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.490 03:54:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.490 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 73889772 kB' 'MemAvailable: 77431608 kB' 'Buffers: 3728 kB' 'Cached: 11545612 kB' 'SwapCached: 0 kB' 'Active: 8559080 kB' 'Inactive: 3492160 kB' 'Active(anon): 7989884 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 
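Every scan stretch in this trace, including the one that resumes below, is the same helper doing a field-by-field walk over a meminfo snapshot. A minimal bash sketch of that pattern, reconstructed from the xtrace alone (assumes bash 4+; this is an approximation, not the verbatim SPDK setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob

    # Load a meminfo snapshot, walk it line by line, and print the value of
    # the first field whose name matches the requested one.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer the per-node counters from sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <n> " prefix; strip it (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # each miss is one 'continue' in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Rsvd      # prints 0 in this run
    get_meminfo HugePages_Surp 0    # node 0's surplus huge pages

Run under bash -x, a loop like this produces exactly the shape of output seen here: one continue record per non-matching field, then an echo and a return 0 at the match.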
[xtrace condensed: the identical field-by-field scan repeats for HugePages_Total, one "continue" per miss from MemTotal through HugePages_Free:]
00:03:15.491 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.491 03:54:29 -- setup/common.sh@33 -- # echo 1536
00:03:15.491 03:54:29 -- setup/common.sh@33 -- # return 0
00:03:15.491 03:54:29 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:15.491 03:54:29 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.491 03:54:29 -- setup/hugepages.sh@27 -- # local node
00:03:15.491 03:54:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.491 03:54:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.491 03:54:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.491 03:54:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.491 03:54:29 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.491 03:54:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.491 03:54:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.491 03:54:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.491 03:54:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.491 03:54:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.492 03:54:29 -- setup/common.sh@18 -- # local node=0
00:03:15.492 03:54:29 -- setup/common.sh@19 -- # local var val
00:03:15.492 03:54:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.492 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.492 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.492 03:54:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.492 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.492 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.492 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 42162924 kB' 'MemUsed: 5905472 kB' 'SwapCached: 0 kB' 'Active: 3704664 kB' 'Inactive: 119432 kB' 'Active(anon): 3344308 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390364 kB' 'Mapped: 184016 kB' 'AnonPages: 436908 kB' 'Shmem: 2910576 kB' 'KernelStack: 13224 kB' 'PageTables: 5420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132288 kB' 'Slab: 425892 kB' 'SReclaimable: 132288 kB' 'SUnreclaim: 293604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
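The get_nodes trace a few records back iterates the extglob pattern /sys/devices/system/node/node+([0-9]) and records one hugepage count per NUMA node (512 and 1024 here). A sketch of that enumeration; the trace does not show where the values come from, so reading each node's 2 MiB pool from sysfs below is an assumption:

    shopt -s extglob nullglob

    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Trailing digits become the index: node0 -> 0, node1 -> 1.
        # Assumed source of the count: the node's 2048 kB hugepage pool.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 2 on this machine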
[xtrace condensed: scan over the node0 snapshot, one "continue" per field, until HugePages_Surp matches:]
00:03:15.493 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.493 03:54:29 -- setup/common.sh@33 -- # echo 0
00:03:15.493 03:54:29 -- setup/common.sh@33 -- # return 0
00:03:15.493 03:54:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
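The hugepages.sh records around here are per-node bookkeeping: each node's expected count is padded with the globally reserved pages plus that node's surplus pages before the final comparison. A sketch with this run's values, reusing the get_meminfo and nodes_sys sketches above (nodes_test is the harness's expected-count array):

    nr_hugepages=1536 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)    # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                    # += 0 here
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # += 0 here
    done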
00:03:15.493 03:54:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.493 03:54:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.493 03:54:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.493 03:54:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.493 03:54:29 -- setup/common.sh@18 -- # local node=1
00:03:15.493 03:54:29 -- setup/common.sh@19 -- # local var val
00:03:15.493 03:54:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.493 03:54:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.493 03:54:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.493 03:54:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.493 03:54:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.493 03:54:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.493 03:54:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218160 kB' 'MemFree: 31726848 kB' 'MemUsed: 12491312 kB' 'SwapCached: 0 kB' 'Active: 4854724 kB' 'Inactive: 3372728 kB' 'Active(anon): 4645884 kB' 'Inactive(anon): 0 kB' 'Active(file): 208840 kB' 'Inactive(file): 3372728 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8158992 kB' 'Mapped: 344 kB' 'AnonPages: 68468 kB' 'Shmem: 4577424 kB' 'KernelStack: 9160 kB' 'PageTables: 3040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135460 kB' 'Slab: 468968 kB' 'SReclaimable: 135460 kB' 'SUnreclaim: 333508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: scan over the node1 snapshot, one "continue" per field, until HugePages_Surp matches (match and final bookkeeping continue below).]
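In the tail of this test, just below, the trace assigns sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1. That is a compact bash dedup idiom: use the counts themselves as array indices so each distinct value is recorded exactly once. A sketch:

    declare -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # indices 512 and 1024 each set once
        sorted_s[nodes_sys[node]]=1
    done
    echo "distinct tested counts: ${!sorted_t[*]}"    # -> 512 1024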
00:03:15.494 03:54:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.494 03:54:29 -- setup/common.sh@33 -- # echo 0
00:03:15.494 03:54:29 -- setup/common.sh@33 -- # return 0
00:03:15.494 03:54:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.494 03:54:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.494 03:54:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.494 03:54:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.494 03:54:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.494 node0=512 expecting 512
00:03:15.494 03:54:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.494 03:54:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.494 03:54:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.494 03:54:29 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:15.494 node1=1024 expecting 1024
00:03:15.494 03:54:29 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:15.494 real	0m3.121s
00:03:15.494 user	0m1.240s
00:03:15.494 sys	0m1.925s
00:03:15.494 03:54:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:15.494 03:54:29 -- common/autotest_common.sh@10 -- # set +x
00:03:15.494 ************************************
00:03:15.494 END TEST custom_alloc
00:03:15.494 ************************************
00:03:15.494 03:54:29 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:15.494 03:54:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.494 03:54:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.494 03:54:29 -- common/autotest_common.sh@10 -- # set +x
00:03:15.494 ************************************
00:03:15.494 START TEST no_shrink_alloc
00:03:15.494 ************************************
00:03:15.494 03:54:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:15.494 03:54:29 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:15.494 03:54:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:15.494 03:54:29 -- setup/hugepages.sh@51 -- # shift
00:03:15.494 03:54:29 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:15.494 03:54:29 -- setup/hugepages.sh@52 -- # local node_ids
00:03:15.494 03:54:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.494 03:54:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:15.494 03:54:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
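get_test_nr_hugepages above turns the requested size 2097152 into nr_hugepages=1024, which only works out if the size is in kB and the default hugepage is the 2048 kB one shown in the snapshots (2097152 / 2048 = 1024). A sketch of that arithmetic; the kB unit is inferred from the numbers, not stated in the trace, and get_meminfo is the earlier sketch:

    size=2097152                                      # interpreted as kB (2 GiB)
    default_hugepages=$(get_meminfo Hugepagesize)     # 2048 on this system
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
    else
        echo "requested size is below one hugepage" >&2
    fi
    echo "nr_hugepages=$nr_hugepages"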
00:03:15.494 03:54:29 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:15.494 03:54:29 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.494 03:54:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:15.494 03:54:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.494 03:54:29 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.494 03:54:29 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.494 03:54:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:15.494 03:54:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.494 03:54:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:15.494 03:54:29 -- setup/hugepages.sh@73 -- # return 0
00:03:15.494 03:54:29 -- setup/hugepages.sh@198 -- # setup output
00:03:15.494 03:54:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.494 03:54:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.800 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.800 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:18.800 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:18.800 03:54:32 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:18.800 03:54:32 -- setup/hugepages.sh@89 -- # local node
00:03:18.800 03:54:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.800 03:54:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.800 03:54:32 -- setup/hugepages.sh@92 -- # local surp
00:03:18.800 03:54:32 -- setup/hugepages.sh@93 -- # local resv
00:03:18.800 03:54:32 -- setup/hugepages.sh@94 -- # local anon
00:03:18.800 03:54:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.800 03:54:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.800 03:54:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.800 03:54:32 -- setup/common.sh@18 -- # local node=
00:03:18.800 03:54:32 -- setup/common.sh@19 -- # local var val
00:03:18.800 03:54:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.800 03:54:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.800 03:54:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.800 03:54:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.800 03:54:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.800 03:54:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
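A few records up, verify_nr_hugepages opened with the test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the kernel brackets the active transparent-hugepage mode, and here the string came out as "always [madvise] never", so THP is not hard-disabled and AnonHugePages gets sampled. A sketch of that gate; reading the mode from /sys/kernel/mm/transparent_hugepage/enabled is an assumption consistent with the test string:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0    # THP disabled, nothing anonymous to count
    fi
    echo "anon_hugepages=$anon"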
00:03:18.800 03:54:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74853652 kB' 'MemAvailable: 78395488 kB' 'Buffers: 3728 kB' 'Cached: 11545700 kB' 'SwapCached: 0 kB' 'Active: 8560368 kB' 'Inactive: 3492160 kB' 'Active(anon): 7991172 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506088 kB' 'Mapped: 184428 kB' 'Shmem: 7488072 kB' 'KReclaimable: 267748 kB' 'Slab: 896024 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 628276 kB' 'KernelStack: 22576 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220160 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: field scan over the snapshot, one "continue" per miss, until AnonHugePages matches:]
00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.801 03:54:32 -- setup/common.sh@33 -- # echo 0
00:03:18.801 03:54:32 -- setup/common.sh@33 -- # return 0
00:03:18.801 03:54:32 -- setup/hugepages.sh@97 -- # anon=0
00:03:18.801 03:54:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.801 03:54:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.801 03:54:32 -- setup/common.sh@18 -- # local node=
00:03:18.801 03:54:32 -- setup/common.sh@19 -- # local var val
00:03:18.801 03:54:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.801 03:54:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.801 03:54:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.801 03:54:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.801 03:54:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.801 03:54:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.801 03:54:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74860028 kB' 'MemAvailable: 78401864 kB' 'Buffers: 3728 kB' 'Cached: 11545704 kB' 'SwapCached: 0 kB' 'Active: 8560224 kB' 'Inactive: 3492160 kB' 'Active(anon): 7991028 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506188 kB' 'Mapped: 184396 kB' 'Shmem: 7488076 kB' 'KReclaimable:
267748 kB' 'Slab: 896128 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 628380 kB' 'KernelStack: 22672 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220112 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB' 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.801 03:54:32 -- setup/common.sh@32 -- # continue 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.801 03:54:32 -- setup/common.sh@31 -- # read -r var val 
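
What fills this stretch of the log is one bash idiom repeated per meminfo key: get_meminfo snapshots the chosen meminfo file (the mapfile/printf pair in the trace), then walks the "key: value" pairs with IFS=': ' read, hitting continue for every key except the one requested and echoing that key's value. A minimal stand-alone sketch of that loop, reading the file directly instead of a snapshot array; the name get_meminfo_sketch is ours, not SPDK's:

    #!/usr/bin/env bash
    # Sketch of the @31/@32 cycle traced above: each non-matching key
    # produces one "[[ ... ]]" test plus one "continue" in the set -x log.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"          # e.g. "0" for HugePages_Surp on this node
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 here, matching the trace
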
00:03:18.802 03:54:32 -- setup/hugepages.sh@99 -- # surp=0
00:03:18.802 03:54:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.802 03:54:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.802 03:54:32 -- setup/common.sh@18 -- # local node=
00:03:18.802 03:54:32 -- setup/common.sh@19 -- # local var val
00:03:18.802 03:54:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.802 03:54:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.802 03:54:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.802 03:54:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.802 03:54:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.802 03:54:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.802 03:54:32 -- setup/common.sh@31 -- # IFS=': '
00:03:18.802 03:54:32 -- setup/common.sh@31 -- # read -r var val _
00:03:18.802 03:54:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74860352 kB' 'MemAvailable: 78402188 kB' 'Buffers: 3728 kB' 'Cached: 11545716 kB' 'SwapCached: 0 kB' 'Active: 8560640 kB' 'Inactive: 3492160 kB' 'Active(anon): 7991444 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506572 kB' 'Mapped: 184396 kB' 'Shmem: 7488088 kB' 'KReclaimable: 267748 kB' 'Slab: 896064 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 628316 kB' 'KernelStack: 22752 kB' 'PageTables: 9680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220128 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:18.802 03:54:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.802 03:54:32 -- setup/common.sh@32 -- # continue
[... same cycle over MemFree through HugePages_Free ...]
00:03:18.803 03:54:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:18.803 03:54:32 -- setup/common.sh@33 -- # echo 0
00:03:18.803 03:54:32 -- setup/common.sh@33 -- # return 0
00:03:18.803 03:54:32 -- setup/hugepages.sh@100 -- # resv=0
00:03:18.803 03:54:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:18.804 03:54:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:18.804 03:54:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:18.804 03:54:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
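
With anon, surp and resv all read back as 0 and the pool echoed at 1024 pages, the consistency checks hugepages.sh runs next (@107 and @109 in the trace) reduce to simple arithmetic. A sketch of that bookkeeping with this run's values plugged in; the variable names follow the trace, but the script itself is only approximated here:

    #!/usr/bin/env bash
    # This run's numbers, as echoed above.
    anon=0             # anonymous huge pages in use
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    nr_hugepages=1024  # requested pool size

    # @107: the configured pool must account for surplus and reserved pages;
    # @109: and must equal the request exactly. Both hold here (1024 == 1024).
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1
    echo "hugepage pool consistent: $nr_hugepages pages"
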
00:03:18.804 03:54:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.804 03:54:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:18.804 03:54:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:18.804 03:54:32 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:18.804 03:54:32 -- setup/common.sh@18 -- # local node=
00:03:18.804 03:54:32 -- setup/common.sh@19 -- # local var val
00:03:18.804 03:54:32 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.804 03:54:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.804 03:54:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.804 03:54:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.804 03:54:32 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.804 03:54:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.804 03:54:32 -- setup/common.sh@31 -- # IFS=': '
00:03:18.804 03:54:32 -- setup/common.sh@31 -- # read -r var val _
00:03:18.804 03:54:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74869368 kB' 'MemAvailable: 78411204 kB' 'Buffers: 3728 kB' 'Cached: 11545728 kB' 'SwapCached: 0 kB' 'Active: 8559860 kB' 'Inactive: 3492160 kB' 'Active(anon): 7990664 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505804 kB' 'Mapped: 184396 kB' 'Shmem: 7488100 kB' 'KReclaimable: 267748 kB' 'Slab: 896064 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 628316 kB' 'KernelStack: 22656 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220080 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
00:03:18.804 03:54:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.804 03:54:32 -- setup/common.sh@32 -- # continue
[... same cycle over MemFree through Unaccepted ...]
00:03:18.805 03:54:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:18.805 03:54:32 -- setup/common.sh@33 -- # echo 1024
00:03:18.805 03:54:32 -- setup/common.sh@33 -- # return 0
00:03:18.805 03:54:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:18.805 03:54:33 -- setup/hugepages.sh@112 -- # get_nodes
00:03:18.805 03:54:33 -- setup/hugepages.sh@27 -- # local node
00:03:18.805 03:54:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.805 03:54:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:18.805 03:54:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:18.805 03:54:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:18.805 03:54:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:18.805 03:54:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
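
The loop that starts next re-runs the same scan once per NUMA node (no_nodes=2 on this rig), and the only difference is which file get_meminfo reads: given a node argument it switches from /proc/meminfo to the per-node sysfs file and strips the "Node N " prefix from each line, so the key scan itself is unchanged. A sketch of that selection, inferred from the @22-@29 trace lines; treat it as illustrative, not as SPDK's exact helper:

    #!/usr/bin/env bash
    shopt -s extglob   # allows the +([0-9]) pattern used in the prefix strip

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: 48068396 kB"; drop the label
    # so the same "key: value" scan works for both global and node files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
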
00:03:18.805 03:54:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:18.805 03:54:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:18.805 03:54:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:18.805 03:54:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.805 03:54:33 -- setup/common.sh@18 -- # local node=0
00:03:18.805 03:54:33 -- setup/common.sh@19 -- # local var val
00:03:18.805 03:54:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.805 03:54:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.805 03:54:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:18.805 03:54:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:18.805 03:54:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.805 03:54:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.805 03:54:33 -- setup/common.sh@31 -- # IFS=': '
00:03:18.805 03:54:33 -- setup/common.sh@31 -- # read -r var val _
00:03:18.805 03:54:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41098800 kB' 'MemUsed: 6969596 kB' 'SwapCached: 0 kB' 'Active: 3705080 kB' 'Inactive: 119432 kB' 'Active(anon): 3344724 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390396 kB' 'Mapped: 184020 kB' 'AnonPages: 437008 kB' 'Shmem: 2910608 kB' 'KernelStack: 13528 kB' 'PageTables: 5836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132288 kB' 'Slab: 426208 kB' 'SReclaimable: 132288 kB' 'SUnreclaim: 293920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:18.805 03:54:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.805 03:54:33 -- setup/common.sh@32 -- # continue
[... same cycle over MemFree through FileHugePages ...]
00:03:18.806 03:54:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.806 03:54:33 -- setup/common.sh@32 -- # continue
00:03:18.806 03:54:33 -- setup/common.sh@31 -- # IFS=': '
00:03:18.806 03:54:33 -- setup/common.sh@31 -- # read -r var val _
00:03:18.806 03:54:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # continue 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # continue 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # continue 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.806 03:54:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.806 03:54:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.806 03:54:33 -- setup/common.sh@33 -- # echo 0 00:03:18.806 03:54:33 -- setup/common.sh@33 -- # return 0 00:03:18.806 03:54:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.806 03:54:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.806 03:54:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.806 03:54:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.806 03:54:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:18.806 node0=1024 expecting 1024 00:03:18.806 03:54:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:18.806 03:54:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:18.806 03:54:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:18.806 03:54:33 -- setup/hugepages.sh@202 -- # setup output 00:03:18.806 03:54:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.806 03:54:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.343 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.343 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:21.343 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:21.343 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:21.606 03:54:35 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:21.606 03:54:35 -- setup/hugepages.sh@89 -- # local node 00:03:21.606 03:54:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.606 03:54:35 -- setup/hugepages.sh@91 -- 
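The pass above is the hugepage bookkeeping: get_meminfo resolves a per-node meminfo file when a node index is supplied, strips the "Node <N> " prefix the kernel adds in those files, and scans key/value pairs until the requested field matches. A minimal standalone sketch of that pattern, written for illustration only (the real helper is the setup/common.sh traced above; names below are otherwise hypothetical):

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern seen in the trace above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem

        # Prefer the per-node view when a node index was given and exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip that off.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" lines until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # prints 0 against the node0 dump above

Splitting on IFS=': ' is what lets a line like "HugePages_Total: 1024" land as var=HugePages_Total and val=1024, with any trailing "kB" discarded into the throwaway field.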
00:03:21.606 03:54:35 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:21.606 03:54:35 -- setup/hugepages.sh@89 -- # local node
00:03:21.606 03:54:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.606 03:54:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.606 03:54:35 -- setup/hugepages.sh@92 -- # local surp
00:03:21.606 03:54:35 -- setup/hugepages.sh@93 -- # local resv
00:03:21.606 03:54:35 -- setup/hugepages.sh@94 -- # local anon
00:03:21.606 03:54:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.606 03:54:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.606 03:54:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.606 03:54:35 -- setup/common.sh@18 -- # local node=
00:03:21.606 03:54:35 -- setup/common.sh@19 -- # local var val
00:03:21.606 03:54:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.606 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.606 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.606 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.606 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.606 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.606 03:54:35 -- setup/common.sh@31 -- # IFS=': '
00:03:21.606 03:54:35 -- setup/common.sh@31 -- # read -r var val _
00:03:21.606 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74854480 kB' 'MemAvailable: 78396316 kB' 'Buffers: 3728 kB' 'Cached: 11545796 kB' 'SwapCached: 0 kB' 'Active: 8562088 kB' 'Inactive: 3492160 kB' 'Active(anon): 7992892 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507528 kB' 'Mapped: 184416 kB' 'Shmem: 7488168 kB' 'KReclaimable: 267748 kB' 'Slab: 895400 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627652 kB' 'KernelStack: 22624 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220080 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: setup/common.sh@31-32 read/continue over every /proc/meminfo field until AnonHugePages matches]
00:03:21.607 03:54:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.607 03:54:35 -- setup/common.sh@33 -- # echo 0
00:03:21.607 03:54:35 -- setup/common.sh@33 -- # return 0
00:03:21.607 03:54:35 -- setup/hugepages.sh@97 -- # anon=0
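The gate at setup/hugepages.sh@96 above is a transparent-hugepage check: the kernel marks the active THP policy with brackets (here "always [madvise] never"), and AnonHugePages is only worth counting toward the hugepage total when the policy is not [never]. A sketch of that logic, assuming the standard sysfs knob (variable names are illustrative):

    # Read the THP policy; the kernel brackets the active choice,
    # e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use, so anonymous hugepages count toward the total.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi

In this run the policy is [madvise], so the branch is taken, and the AnonHugePages read above returns 0 kB anyway, hence anon=0.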
00:03:21.607 03:54:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.607 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.607 03:54:35 -- setup/common.sh@18 -- # local node=
00:03:21.607 03:54:35 -- setup/common.sh@19 -- # local var val
00:03:21.607 03:54:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.607 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.607 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.607 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.607 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.607 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.607 03:54:35 -- setup/common.sh@31 -- # IFS=': '
00:03:21.607 03:54:35 -- setup/common.sh@31 -- # read -r var val _
00:03:21.608 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74855140 kB' 'MemAvailable: 78396976 kB' 'Buffers: 3728 kB' 'Cached: 11545808 kB' 'SwapCached: 0 kB' 'Active: 8561368 kB' 'Inactive: 3492160 kB' 'Active(anon): 7992172 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507256 kB' 'Mapped: 184404 kB' 'Shmem: 7488180 kB' 'KReclaimable: 267748 kB' 'Slab: 895384 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627636 kB' 'KernelStack: 22528 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220032 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: setup/common.sh@31-32 read/continue over every /proc/meminfo field until HugePages_Surp matches]
00:03:21.609 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.609 03:54:35 -- setup/common.sh@33 -- # echo 0
00:03:21.609 03:54:35 -- setup/common.sh@33 -- # return 0
00:03:21.609 03:54:35 -- setup/hugepages.sh@99 -- # surp=0
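On the two counters this pass reads back to back: HugePages_Surp counts pages the kernel has allocated beyond the configured pool (surplus, e.g. via nr_overcommit_hugepages), while HugePages_Rsvd, read next, counts pages promised to mappings but not yet faulted in; both have to be folded into the expected total. A hypothetical one-liner pulling the same counters the trace scans for:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo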
00:03:21.609 03:54:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.609 03:54:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.609 03:54:35 -- setup/common.sh@18 -- # local node=
00:03:21.609 03:54:35 -- setup/common.sh@19 -- # local var val
00:03:21.609 03:54:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.609 03:54:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.609 03:54:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.609 03:54:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.609 03:54:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.609 03:54:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.609 03:54:35 -- setup/common.sh@31 -- # IFS=': '
00:03:21.609 03:54:35 -- setup/common.sh@31 -- # read -r var val _
00:03:21.609 03:54:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286556 kB' 'MemFree: 74860024 kB' 'MemAvailable: 78401860 kB' 'Buffers: 3728 kB' 'Cached: 11545812 kB' 'SwapCached: 0 kB' 'Active: 8561532 kB' 'Inactive: 3492160 kB' 'Active(anon): 7992336 kB' 'Inactive(anon): 0 kB' 'Active(file): 569196 kB' 'Inactive(file): 3492160 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507448 kB' 'Mapped: 184404 kB' 'Shmem: 7488184 kB' 'KReclaimable: 267748 kB' 'Slab: 895432 kB' 'SReclaimable: 267748 kB' 'SUnreclaim: 627684 kB' 'KernelStack: 22496 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483304 kB' 'Committed_AS: 9302256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220048 kB' 'VmallocChunk: 0 kB' 'Percpu: 95872 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB'
[xtrace elided: setup/common.sh@31-32 read/continue over every /proc/meminfo field until HugePages_Rsvd matches]
00:03:21.610 03:54:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.610 03:54:35 -- setup/common.sh@33 -- # echo 0
00:03:21.610 03:54:35 -- setup/common.sh@33 -- # return 0
00:03:21.610 03:54:35 -- setup/hugepages.sh@100 -- # resv=0
00:03:21.610 03:54:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:21.610 03:54:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:21.610 03:54:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:21.610 03:54:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:21.610 03:54:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.610 03:54:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
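The arithmetic at setup/hugepages.sh@107-109 is the invariant this whole verification encodes: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages (1024 == 1024 + 0 + 0 in this run). A condensed restatement, with names mirroring the script's locals (illustrative only):

    # Values as echoed by the trace above.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: total=$total" >&2
        exit 1
    fi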
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3627988 kB' 'DirectMap2M: 23314432 kB' 'DirectMap1G: 74448896 kB' 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 
03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 
03:54:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.611 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.611 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 
00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.612 03:54:36 -- setup/common.sh@33 -- # echo 1024 00:03:21.612 03:54:36 -- setup/common.sh@33 -- # return 0 00:03:21.612 03:54:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:03:21.612 03:54:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.612 03:54:36 -- setup/hugepages.sh@27 -- # local node 00:03:21.612 03:54:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.612 03:54:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.612 03:54:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.612 03:54:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.612 03:54:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.612 03:54:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.612 03:54:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.612 03:54:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.612 03:54:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.612 03:54:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.612 03:54:36 -- setup/common.sh@18 -- # local node=0 00:03:21.612 03:54:36 -- setup/common.sh@19 -- # local var val 00:03:21.612 03:54:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.612 03:54:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.612 03:54:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.612 03:54:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.612 03:54:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.612 03:54:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41096596 kB' 'MemUsed: 6971800 kB' 'SwapCached: 0 kB' 'Active: 3704276 kB' 'Inactive: 119432 kB' 'Active(anon): 3343920 kB' 'Inactive(anon): 0 kB' 'Active(file): 360356 kB' 'Inactive(file): 119432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3390412 kB' 'Mapped: 184020 kB' 'AnonPages: 436204 kB' 'Shmem: 2910624 kB' 'KernelStack: 13336 kB' 'PageTables: 5524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132288 kB' 'Slab: 425588 kB' 'SReclaimable: 132288 kB' 'SUnreclaim: 293300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 
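[Editor's note] The `get_nodes` step above enumerates `/sys/devices/system/node/node*` and then re-runs the meminfo scan against each node's own file. A short sketch of that per-node accounting, under the standard sysfs layout; `node_pages` and the awk one-liner are assumptions for illustration.

```bash
# Enumerate NUMA nodes and read each node's hugepage pool from its own
# meminfo file, as the get_nodes/get_meminfo calls in the trace do.
declare -A node_pages
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}   # "/sys/.../node0" -> "0"
    node_pages[$n]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
for n in "${!node_pages[@]}"; do
    echo "node$n has ${node_pages[$n]} hugepages"   # node0=1024, node1=0 here
done
```

On the two-socket host in this log all 1024 pages land on node0, which is what the `node0=1024 expecting 1024` check below verifies.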
00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.612 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.612 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # continue 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.613 03:54:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.613 03:54:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.613 03:54:36 -- setup/common.sh@33 -- # echo 0 00:03:21.613 03:54:36 -- setup/common.sh@33 -- # return 0 00:03:21.613 03:54:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.613 03:54:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.613 03:54:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.613 03:54:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.613 03:54:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.613 node0=1024 expecting 1024 00:03:21.613 03:54:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.613 00:03:21.613 real 0m6.081s 00:03:21.613 user 0m2.421s 00:03:21.613 sys 0m3.742s 00:03:21.613 03:54:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.613 03:54:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.613 ************************************ 00:03:21.613 END TEST no_shrink_alloc 00:03:21.613 ************************************ 00:03:21.613 03:54:36 -- setup/hugepages.sh@217 -- # clear_hp 00:03:21.613 03:54:36 -- setup/hugepages.sh@37 -- # local node hp 00:03:21.613 03:54:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:21.613 03:54:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:21.613 03:54:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:21.613 
03:54:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:21.613 03:54:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:21.613 03:54:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:21.613 03:54:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:21.613 03:54:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:21.613 03:54:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:21.613 03:54:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:21.613 03:54:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:21.613 03:54:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:21.613 00:03:21.613 real 0m23.529s 00:03:21.613 user 0m9.089s 00:03:21.613 sys 0m13.837s 00:03:21.613 03:54:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.613 03:54:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.613 ************************************ 00:03:21.613 END TEST hugepages 00:03:21.613 ************************************ 00:03:21.873 03:54:36 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:21.873 03:54:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.873 03:54:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.873 03:54:36 -- common/autotest_common.sh@10 -- # set +x 00:03:21.873 ************************************ 00:03:21.873 START TEST driver 00:03:21.873 ************************************ 00:03:21.873 03:54:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:21.873 * Looking for test storage... 
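[Editor's note] The hugepages suite tears down with `clear_hp` just above: for each node, each `hugepages-*` pool gets an `echo 0`. A sketch of that reset, assuming the target of the echo is each pool's `nr_hugepages` knob (the knob name is inferred; the trace only shows the loop and the echo).

```bash
# Reset every hugepage pool on every NUMA node, mirroring the clear_hp loop
# traced above. Requires root; releases any hugepage reservations still held.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # the repeated "echo 0" entries in the trace
    done
done
export CLEAR_HUGE=yes   # flag picked up by later setup.sh runs, per the trace
```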
00:03:21.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.873 03:54:36 -- setup/driver.sh@68 -- # setup reset 00:03:21.873 03:54:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.873 03:54:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.068 03:54:40 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:26.068 03:54:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.068 03:54:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.068 03:54:40 -- common/autotest_common.sh@10 -- # set +x 00:03:26.328 ************************************ 00:03:26.328 START TEST guess_driver 00:03:26.328 ************************************ 00:03:26.328 03:54:40 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:26.328 03:54:40 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:26.328 03:54:40 -- setup/driver.sh@47 -- # local fail=0 00:03:26.328 03:54:40 -- setup/driver.sh@49 -- # pick_driver 00:03:26.328 03:54:40 -- setup/driver.sh@36 -- # vfio 00:03:26.328 03:54:40 -- setup/driver.sh@21 -- # local iommu_grups 00:03:26.328 03:54:40 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:26.328 03:54:40 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:26.328 03:54:40 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:26.328 03:54:40 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:26.328 03:54:40 -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:03:26.328 03:54:40 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:26.328 03:54:40 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:26.328 03:54:40 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:26.328 03:54:40 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:26.328 03:54:40 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:26.328 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:26.328 03:54:40 -- setup/driver.sh@30 -- # return 0 00:03:26.328 03:54:40 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:26.328 03:54:40 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:26.328 03:54:40 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:26.328 03:54:40 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:26.328 Looking for driver=vfio-pci 00:03:26.328 03:54:40 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.328 03:54:40 -- setup/driver.sh@45 -- # setup output config 00:03:26.328 03:54:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.328 03:54:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.864 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:28.864 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:28.864 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.864 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:28.864 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:28.864 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:28.864 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:28.864 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:28.864 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.122 03:54:43 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:29.122 03:54:43 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:29.122 03:54:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.134 03:54:44 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:30.134 03:54:44 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:30.134 03:54:44 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.134 03:54:44 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:30.134 03:54:44 -- setup/driver.sh@65 -- # setup reset 00:03:30.134 03:54:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.134 03:54:44 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.328 00:03:34.328 real 0m7.991s 00:03:34.328 user 0m2.291s 00:03:34.328 sys 0m4.092s 00:03:34.328 03:54:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.328 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 ************************************ 00:03:34.328 END TEST guess_driver 00:03:34.328 ************************************ 00:03:34.328 00:03:34.328 real 0m12.387s 00:03:34.328 user 0m3.526s 00:03:34.328 sys 0m6.392s 00:03:34.328 03:54:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.328 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 ************************************ 00:03:34.328 END TEST driver 00:03:34.328 ************************************ 00:03:34.328 03:54:48 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:34.328 03:54:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.328 03:54:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.328 03:54:48 -- common/autotest_common.sh@10 -- # set +x 00:03:34.328 ************************************ 00:03:34.328 START TEST devices 00:03:34.328 ************************************ 00:03:34.328 03:54:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:34.587 * Looking for test storage... 
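[Editor's note] The `guess_driver` test above settles on vfio-pci by checking that the host exposes IOMMU groups (175 here) and that `modprobe --show-depends vfio_pci` resolves to real `.ko` files. A condensed sketch of that decision; the `uio_pci_generic` fallback is an assumption for illustration, not shown in this trace.

```bash
#!/usr/bin/env bash
shopt -s nullglob   # empty glob -> empty array, so the count below is honest

# Prefer vfio-pci when the IOMMU is usable, as the trace above does.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if ((${#groups[@]} > 0)) &&
        modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback when no IOMMU groups exist
    fi
}
echo "Looking for driver=$(pick_driver)"   # resolves to vfio-pci in this log
```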
00:03:34.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:34.587 03:54:48 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:34.587 03:54:48 -- setup/devices.sh@192 -- # setup reset 00:03:34.587 03:54:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.587 03:54:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.877 03:54:52 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:37.877 03:54:52 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:37.877 03:54:52 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:37.877 03:54:52 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:37.877 03:54:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:37.877 03:54:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:37.877 03:54:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:37.877 03:54:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.877 03:54:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:37.877 03:54:52 -- setup/devices.sh@196 -- # blocks=() 00:03:37.877 03:54:52 -- setup/devices.sh@196 -- # declare -a blocks 00:03:37.877 03:54:52 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:37.877 03:54:52 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:37.877 03:54:52 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:37.877 03:54:52 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:37.877 03:54:52 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:37.877 03:54:52 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:37.877 03:54:52 -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:03:37.877 03:54:52 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:37.877 03:54:52 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:37.877 03:54:52 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:37.877 03:54:52 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:37.877 No valid GPT data, bailing 00:03:37.877 03:54:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.877 03:54:52 -- scripts/common.sh@391 -- # pt= 00:03:37.877 03:54:52 -- scripts/common.sh@392 -- # return 1 00:03:37.877 03:54:52 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:37.877 03:54:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:37.877 03:54:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:37.877 03:54:52 -- setup/common.sh@80 -- # echo 1000204886016 00:03:37.877 03:54:52 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:37.877 03:54:52 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:37.877 03:54:52 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:03:37.877 03:54:52 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:37.877 03:54:52 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:37.877 03:54:52 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:37.877 03:54:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:37.877 03:54:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:37.877 03:54:52 -- common/autotest_common.sh@10 -- # set +x 00:03:37.877 ************************************ 00:03:37.877 START TEST nvme_mount 00:03:37.877 ************************************ 00:03:37.877 03:54:52 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:37.877 03:54:52 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:37.877 03:54:52 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:37.877 03:54:52 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:37.877 03:54:52 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:37.877 03:54:52 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:37.877 03:54:52 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:37.877 03:54:52 -- setup/common.sh@40 -- # local part_no=1 00:03:37.877 03:54:52 -- setup/common.sh@41 -- # local size=1073741824 00:03:37.877 03:54:52 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.877 03:54:52 -- setup/common.sh@44 -- # parts=() 00:03:37.877 03:54:52 -- setup/common.sh@44 -- # local parts 00:03:37.877 03:54:52 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.877 03:54:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.877 03:54:52 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.877 03:54:52 -- setup/common.sh@46 -- # (( part++ )) 00:03:37.877 03:54:52 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.877 03:54:52 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:37.877 03:54:52 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:37.877 03:54:52 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:38.815 Creating new GPT entries in memory. 00:03:38.815 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:38.815 other utilities. 00:03:38.815 03:54:53 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:38.815 03:54:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.815 03:54:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.815 03:54:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.815 03:54:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.193 Creating new GPT entries in memory. 00:03:40.193 The operation has completed successfully. 
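[Editor's note] The `partition_drive` step just traced wipes the GPT and creates a single 1 GiB partition: 1073741824 bytes / 512-byte sectors = 2097152 sectors, so a partition starting at sector 2048 ends at 2099199, exactly the `--new=1:2048:2099199` above. A standalone sketch with the device and sizes from this log; it is destructive, and the final `partprobe` stands in for the harness's uevent-based wait (`sync_dev_uevents.sh`).

```bash
# Wipe the partition table and create one 1 GiB partition, as traced above.
disk=/dev/nvme0n1
size=1073741824                  # 1 GiB
start=2048
end=$((start + size / 512 - 1))  # 2099199, matching --new=1:2048:2099199

sgdisk "$disk" --zap-all                         # "GPT data structures destroyed!"
flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"
partprobe "$disk"   # re-read the table; the harness waits on uevents instead
```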
00:03:40.193 03:54:54 -- setup/common.sh@57 -- # (( part++ )) 00:03:40.193 03:54:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.193 03:54:54 -- setup/common.sh@62 -- # wait 3616195 00:03:40.193 03:54:54 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.193 03:54:54 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:40.193 03:54:54 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.193 03:54:54 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:40.193 03:54:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:40.193 03:54:54 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.193 03:54:54 -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.193 03:54:54 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:40.193 03:54:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:40.193 03:54:54 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.193 03:54:54 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.193 03:54:54 -- setup/devices.sh@53 -- # local found=0 00:03:40.193 03:54:54 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.193 03:54:54 -- setup/devices.sh@56 -- # : 00:03:40.193 03:54:54 -- setup/devices.sh@59 -- # local pci status 00:03:40.193 03:54:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.193 03:54:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:40.193 03:54:54 -- setup/devices.sh@47 -- # setup output config 00:03:40.193 03:54:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.193 03:54:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:42.729 03:54:57 -- setup/devices.sh@63 -- # found=1 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 
03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.729 03:54:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.729 03:54:57 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:42.729 03:54:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.729 03:54:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.729 03:54:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.729 03:54:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:42.729 03:54:57 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.729 03:54:57 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.729 03:54:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.729 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.729 03:54:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.729 03:54:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.988 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:42.988 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:42.988 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.988 
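[Editor's note] The `cleanup_nvme` teardown traced here unmounts the test mount if present, then strips signatures from the partition and the whole disk; the wipefs output above shows the ext4 superblock magic (53 ef) erased first, then the "EFI PART" GPT headers. A sketch of the same sequence with the paths from this log.

```bash
# Unmount and wipe the test device, mirroring the cleanup_nvme steps traced.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
# ext4 magic on the partition first, then GPT signatures on the disk itself
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
```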
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:42.988 03:54:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:42.988 03:54:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:42.988 03:54:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.988 03:54:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:42.988 03:54:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.248 03:54:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.248 03:54:57 -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.248 03:54:57 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:43.248 03:54:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.248 03:54:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.248 03:54:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.248 03:54:57 -- setup/devices.sh@53 -- # local found=0 00:03:43.248 03:54:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.248 03:54:57 -- setup/devices.sh@56 -- # : 00:03:43.248 03:54:57 -- setup/devices.sh@59 -- # local pci status 00:03:43.248 03:54:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.248 03:54:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:43.248 03:54:57 -- setup/devices.sh@47 -- # setup output config 00:03:43.248 03:54:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.248 03:54:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:45.784 03:55:00 -- setup/devices.sh@63 -- # found=1 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.784 03:55:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:45.784 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.043 03:55:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.043 03:55:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:46.043 03:55:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.043 03:55:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:46.043 03:55:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:46.043 03:55:00 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.043 03:55:00 -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:03:46.044 03:55:00 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:46.044 03:55:00 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:46.044 03:55:00 -- setup/devices.sh@50 -- # local mount_point= 00:03:46.044 03:55:00 -- setup/devices.sh@51 -- # local test_file= 00:03:46.044 03:55:00 -- setup/devices.sh@53 -- # local found=0 00:03:46.044 03:55:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:46.044 03:55:00 -- setup/devices.sh@59 -- # local pci status 00:03:46.044 03:55:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.044 03:55:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:46.044 03:55:00 -- setup/devices.sh@47 -- # setup output config 00:03:46.044 03:55:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.044 03:55:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.644 03:55:03 -- 
setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:48.645 03:55:03 -- setup/devices.sh@63 -- # found=1 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.645 03:55:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:48.645 03:55:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.904 03:55:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.904 03:55:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.904 03:55:03 -- setup/devices.sh@68 -- # return 0 00:03:48.904 03:55:03 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:48.904 03:55:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.904 03:55:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:48.904 03:55:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.904 03:55:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.904 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.904 00:03:48.904 real 0m11.012s 00:03:48.904 user 0m3.309s 00:03:48.904 sys 0m5.538s 00:03:48.904 03:55:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.904 03:55:03 -- common/autotest_common.sh@10 -- # set +x 00:03:48.904 ************************************ 00:03:48.904 END TEST nvme_mount 00:03:48.904 ************************************ 00:03:48.904 03:55:03 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:48.904 03:55:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.904 03:55:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.904 03:55:03 -- common/autotest_common.sh@10 -- # set +x 00:03:49.163 ************************************ 00:03:49.163 START TEST dm_mount 00:03:49.163 ************************************ 00:03:49.163 03:55:03 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:49.163 03:55:03 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:49.163 03:55:03 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:49.163 03:55:03 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:49.163 03:55:03 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:49.163 03:55:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.163 03:55:03 -- setup/common.sh@40 -- # local part_no=2 00:03:49.163 03:55:03 -- setup/common.sh@41 -- # local size=1073741824 00:03:49.163 03:55:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.163 03:55:03 -- setup/common.sh@44 -- # parts=() 00:03:49.163 03:55:03 -- setup/common.sh@44 -- # local parts 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.163 03:55:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part++ )) 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.163 03:55:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part++ )) 00:03:49.163 03:55:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.163 03:55:03 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.163 03:55:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.163 03:55:03 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:50.101 Creating new GPT entries in memory. 00:03:50.101 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.101 other utilities. 00:03:50.101 03:55:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.101 03:55:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.101 03:55:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.101 03:55:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.101 03:55:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.039 Creating new GPT entries in memory. 00:03:51.039 The operation has completed successfully. 
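To reproduce the partitioning step above by hand, outside the test harness, a minimal sketch follows. It assumes a disposable scratch device at /dev/nvme0n1 and sgdisk from the gdisk package; the device name and sector ranges are copied from this run, and everything on the disk is destroyed.

  disk=/dev/nvme0n1
  sudo sgdisk "$disk" --zap-all                       # drop GPT and protective MBR
  # two 1 GiB partitions in 512-byte sectors: 2099199 - 2048 + 1 = 2097152 sectors = 1 GiB
  sudo flock "$disk" sgdisk "$disk" --new=1:2048:2099199
  sudo flock "$disk" sgdisk "$disk" --new=2:2099200:4196351
  sudo partprobe "$disk"                              # the harness instead waits on udev uevents
  lsblk "$disk"                                       # expect nvme0n1p1 and nvme0n1p2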
00:03:51.039 03:55:05 -- setup/common.sh@57 -- # (( part++ )) 00:03:51.039 03:55:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.039 03:55:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.039 03:55:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.039 03:55:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:52.420 The operation has completed successfully. 00:03:52.420 03:55:06 -- setup/common.sh@57 -- # (( part++ )) 00:03:52.420 03:55:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.420 03:55:06 -- setup/common.sh@62 -- # wait 3620399 00:03:52.420 03:55:06 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:52.420 03:55:06 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.420 03:55:06 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.420 03:55:06 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:52.420 03:55:06 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:52.420 03:55:06 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.420 03:55:06 -- setup/devices.sh@161 -- # break 00:03:52.420 03:55:06 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.420 03:55:06 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:52.420 03:55:06 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:52.420 03:55:06 -- setup/devices.sh@166 -- # dm=dm-0 00:03:52.420 03:55:06 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:52.420 03:55:06 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:52.420 03:55:06 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.420 03:55:06 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:52.420 03:55:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.420 03:55:06 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.420 03:55:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:52.420 03:55:06 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.420 03:55:06 -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.420 03:55:06 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:52.420 03:55:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:52.420 03:55:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.420 03:55:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.420 03:55:06 -- setup/devices.sh@53 -- # local found=0 00:03:52.420 03:55:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:52.420 03:55:06 -- setup/devices.sh@56 -- # : 00:03:52.420 03:55:06 -- 
setup/devices.sh@59 -- # local pci status 00:03:52.420 03:55:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.420 03:55:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:52.420 03:55:06 -- setup/devices.sh@47 -- # setup output config 00:03:52.420 03:55:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.420 03:55:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.961 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.961 03:55:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:54.961 03:55:09 -- setup/devices.sh@63 -- # found=1 00:03:54.961 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.961 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.961 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.962 03:55:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:54.962 03:55:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.962 03:55:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:54.962 03:55:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:54.962 03:55:09 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.962 03:55:09 -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:54.962 03:55:09 -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:03:54.962 03:55:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:54.962 03:55:09 -- setup/devices.sh@50 -- # local mount_point= 00:03:54.962 03:55:09 -- setup/devices.sh@51 -- # local test_file= 00:03:54.962 03:55:09 -- setup/devices.sh@53 -- # local found=0 00:03:54.962 03:55:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.962 03:55:09 -- setup/devices.sh@59 -- # local pci status 00:03:54.962 03:55:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.962 03:55:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:03:54.962 03:55:09 -- setup/devices.sh@47 -- # setup output config 00:03:54.962 03:55:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.962 03:55:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.254 03:55:12 -- setup/devices.sh@63 -- # found=1 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 
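The holder@nvme0n1p1:dm-0 entries checked above are plain sysfs links; the same verification done manually looks roughly like this (nvme_dm_test and the partition names are the ones from this run):

  dm=$(readlink -f /dev/mapper/nvme_dm_test)      # e.g. /dev/dm-0
  dm=${dm##*/}                                    # strip to dm-0
  for part in nvme0n1p1 nvme0n1p2; do
      # each backing partition lists the dm node among its holders
      [ -e /sys/class/block/$part/holders/$dm ] && echo "$part is held by $dm"
  done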
00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.254 03:55:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.254 03:55:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:58.254 03:55:12 -- setup/devices.sh@68 -- # return 0 00:03:58.254 03:55:12 -- setup/devices.sh@187 -- # cleanup_dm 00:03:58.254 03:55:12 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:58.254 03:55:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.254 03:55:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:58.254 03:55:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:58.254 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:58.254 03:55:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:58.254 00:03:58.254 real 0m8.805s 00:03:58.254 user 0m2.057s 00:03:58.254 sys 0m3.768s 00:03:58.254 03:55:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:58.254 03:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.254 ************************************ 00:03:58.254 END TEST dm_mount 00:03:58.254 ************************************ 00:03:58.254 03:55:12 -- setup/devices.sh@1 -- # cleanup 00:03:58.254 03:55:12 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:58.254 03:55:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.254 03:55:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:58.254 03:55:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:58.254 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:58.254 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:58.254 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:58.254 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:58.254 03:55:12 -- setup/devices.sh@12 -- # cleanup_dm 00:03:58.254 03:55:12 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:58.254 03:55:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:58.254 03:55:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:58.254 03:55:12 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:58.254 00:03:58.254 real 0m23.755s 00:03:58.254 user 0m6.776s 00:03:58.254 sys 0m11.679s 00:03:58.254 03:55:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:58.254 03:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.254 ************************************ 00:03:58.254 END TEST devices 00:03:58.254 ************************************ 00:03:58.254 00:03:58.254 real 1m20.974s 00:03:58.254 user 0m26.452s 00:03:58.254 sys 0m44.591s 00:03:58.254 03:55:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:58.254 03:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:58.254 ************************************ 00:03:58.254 END TEST setup.sh 00:03:58.254 ************************************ 00:03:58.255 03:55:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:01.544 Hugepages 00:04:01.544 node hugesize free / total 00:04:01.544 node0 1048576kB 0 / 0 00:04:01.544 node0 2048kB 2048 / 2048 00:04:01.544 node1 1048576kB 0 / 0 00:04:01.544 node1 2048kB 0 / 0 00:04:01.544 00:04:01.544 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.544 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:01.544 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:01.544 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:01.544 03:55:15 -- spdk/autotest.sh@130 -- # uname -s 00:04:01.544 03:55:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:01.544 03:55:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:01.544 03:55:15 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.080 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:04.080 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:04.080 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.018 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.018 03:55:19 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:05.955 03:55:20 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:05.955 03:55:20 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:05.955 03:55:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.956 03:55:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:05.956 03:55:20 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:05.956 03:55:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:05.956 03:55:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.956 03:55:20 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.956 03:55:20 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:05.956 03:55:20 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:05.956 03:55:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:86:00.0 00:04:05.956 03:55:20 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.248 Waiting for block devices as requested 00:04:09.248 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:09.248 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:09.248 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.507 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.507 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.508 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.767 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.767 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.767 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.767 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.026 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.026 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:10.026 03:55:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.026 03:55:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1488 -- # grep 0000:86:00.0/nvme/nvme 00:04:10.026 03:55:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:10.026 03:55:24 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:10.026 03:55:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.026 03:55:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.026 03:55:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:10.026 03:55:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.026 03:55:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.026 03:55:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:10.026 03:55:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.026 03:55:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.026 03:55:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.026 03:55:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:10.026 03:55:24 -- common/autotest_common.sh@1543 -- # continue 00:04:10.026 03:55:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:10.026 03:55:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:10.026 03:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:10.285 03:55:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:10.285 03:55:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:10.285 03:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:10.285 03:55:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.820 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.820 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.820 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.820 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.079 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.052 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.052 03:55:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:14.052 03:55:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:14.052 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.052 03:55:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:14.052 03:55:28 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:14.052 03:55:28 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.052 03:55:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:14.052 03:55:28 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:14.052 03:55:28 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:14.052 03:55:28 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:14.052 
03:55:28 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:14.052 03:55:28 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.052 03:55:28 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.052 03:55:28 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:14.311 03:55:28 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:14.311 03:55:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:86:00.0 00:04:14.311 03:55:28 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:14.311 03:55:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:14.311 03:55:28 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:14.311 03:55:28 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:14.311 03:55:28 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:14.311 03:55:28 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:86:00.0 00:04:14.311 03:55:28 -- common/autotest_common.sh@1578 -- # [[ -z 0000:86:00.0 ]] 00:04:14.311 03:55:28 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3629723 00:04:14.311 03:55:28 -- common/autotest_common.sh@1584 -- # waitforlisten 3629723 00:04:14.311 03:55:28 -- common/autotest_common.sh@817 -- # '[' -z 3629723 ']' 00:04:14.311 03:55:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.311 03:55:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:14.311 03:55:28 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.311 03:55:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.311 03:55:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:14.311 03:55:28 -- common/autotest_common.sh@10 -- # set +x 00:04:14.311 [2024-04-19 03:55:28.653519] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
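The waitforlisten step above can be approximated without autotest_common.sh by polling the RPC socket until the target answers. A sketch, run from an SPDK checkout (the retry count and sleep interval are arbitrary; rpc.py defaults to /var/tmp/spdk.sock):

  ./build/bin/spdk_tgt &
  tgt_pid=$!
  for _ in $(seq 1 100); do
      # rpc.py exits non-zero until spdk_tgt is listening on /var/tmp/spdk.sock
      ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done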
00:04:14.312 [2024-04-19 03:55:28.653580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629723 ] 00:04:14.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.312 [2024-04-19 03:55:28.734682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.312 [2024-04-19 03:55:28.827605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.248 03:55:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:15.248 03:55:29 -- common/autotest_common.sh@850 -- # return 0 00:04:15.248 03:55:29 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:04:15.248 03:55:29 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:04:15.248 03:55:29 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:04:18.537 nvme0n1 00:04:18.537 03:55:32 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:18.537 [2024-04-19 03:55:32.887880] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:18.537 request: 00:04:18.537 { 00:04:18.537 "nvme_ctrlr_name": "nvme0", 00:04:18.537 "password": "test", 00:04:18.537 "method": "bdev_nvme_opal_revert", 00:04:18.537 "req_id": 1 00:04:18.537 } 00:04:18.537 Got JSON-RPC error response 00:04:18.537 response: 00:04:18.537 { 00:04:18.537 "code": -32602, 00:04:18.537 "message": "Invalid parameters" 00:04:18.537 } 00:04:18.537 03:55:32 -- common/autotest_common.sh@1590 -- # true 00:04:18.537 03:55:32 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:04:18.537 03:55:32 -- common/autotest_common.sh@1594 -- # killprocess 3629723 00:04:18.537 03:55:32 -- common/autotest_common.sh@936 -- # '[' -z 3629723 ']' 00:04:18.537 03:55:32 -- common/autotest_common.sh@940 -- # kill -0 3629723 00:04:18.537 03:55:32 -- common/autotest_common.sh@941 -- # uname 00:04:18.537 03:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.537 03:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3629723 00:04:18.537 03:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.537 03:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.537 03:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3629723' 00:04:18.537 killing process with pid 3629723 00:04:18.537 03:55:32 -- common/autotest_common.sh@955 -- # kill 3629723 00:04:18.537 03:55:32 -- common/autotest_common.sh@960 -- # wait 3629723 00:04:20.441 03:55:34 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:20.441 03:55:34 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:20.441 03:55:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.441 03:55:34 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:20.441 03:55:34 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:20.441 03:55:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:20.441 03:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.441 03:55:34 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.441 03:55:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.441 03:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 
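For reference, the two RPC calls behind the OPAL revert attempt above, runnable against a live spdk_tgt (controller name, PCI address, and password are the ones from this log; on a drive without OPAL support the -32602 "Invalid parameters" response is the expected outcome, which is why the harness follows it with true):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true   # tolerated failure, as in the log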
00:04:20.441 03:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.441 ************************************ 00:04:20.441 START TEST env 00:04:20.441 ************************************ 00:04:20.441 03:55:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.441 * Looking for test storage... 00:04:20.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.441 03:55:34 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.441 03:55:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.441 03:55:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.441 03:55:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.700 ************************************ 00:04:20.700 START TEST env_memory 00:04:20.700 ************************************ 00:04:20.700 03:55:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.700 00:04:20.700 00:04:20.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.700 http://cunit.sourceforge.net/ 00:04:20.700 00:04:20.700 00:04:20.700 Suite: memory 00:04:20.700 Test: alloc and free memory map ...[2024-04-19 03:55:35.093717] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.700 passed 00:04:20.700 Test: mem map translation ...[2024-04-19 03:55:35.122920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.700 [2024-04-19 03:55:35.122940] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.700 [2024-04-19 03:55:35.122997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.700 [2024-04-19 03:55:35.123006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.700 passed 00:04:20.700 Test: mem map registration ...[2024-04-19 03:55:35.183282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:20.700 [2024-04-19 03:55:35.183301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:20.700 passed 00:04:20.960 Test: mem map adjacent registrations ...passed 00:04:20.960 00:04:20.960 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.960 suites 1 1 n/a 0 0 00:04:20.960 tests 4 4 4 0 0 00:04:20.960 asserts 152 152 152 0 n/a 00:04:20.960 00:04:20.960 Elapsed time = 0.212 seconds 00:04:20.960 00:04:20.960 real 0m0.225s 00:04:20.960 user 0m0.215s 00:04:20.960 sys 0m0.009s 00:04:20.960 03:55:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:20.960 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.960 ************************************ 00:04:20.960 END TEST env_memory 00:04:20.960 ************************************ 
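A note on the banners here: run_test is the harness wrapper that prints the START TEST / END TEST markers seen throughout this log and then executes the given command. Reduced to its visible behavior it is roughly the following sketch; the real helper in test/common/autotest_common.sh also manages xtrace and argument checks.

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      "$@"                        # here: ./test/env/memory/memory_ut
      echo "END TEST $name"
  }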
00:04:20.960 03:55:35 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.960 03:55:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.960 03:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.960 03:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:20.960 ************************************ 00:04:20.960 START TEST env_vtophys 00:04:20.960 ************************************ 00:04:20.960 03:55:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.960 EAL: lib.eal log level changed from notice to debug 00:04:20.960 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.960 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.960 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.960 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.960 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.960 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.960 EAL: Detected lcore 6 as core 6 on socket 0 00:04:20.960 EAL: Detected lcore 7 as core 8 on socket 0 00:04:20.960 EAL: Detected lcore 8 as core 9 on socket 0 00:04:20.960 EAL: Detected lcore 9 as core 10 on socket 0 00:04:20.960 EAL: Detected lcore 10 as core 11 on socket 0 00:04:20.960 EAL: Detected lcore 11 as core 12 on socket 0 00:04:20.960 EAL: Detected lcore 12 as core 13 on socket 0 00:04:20.960 EAL: Detected lcore 13 as core 14 on socket 0 00:04:20.960 EAL: Detected lcore 14 as core 16 on socket 0 00:04:20.960 EAL: Detected lcore 15 as core 17 on socket 0 00:04:20.960 EAL: Detected lcore 16 as core 18 on socket 0 00:04:20.960 EAL: Detected lcore 17 as core 19 on socket 0 00:04:20.960 EAL: Detected lcore 18 as core 20 on socket 0 00:04:20.960 EAL: Detected lcore 19 as core 21 on socket 0 00:04:20.960 EAL: Detected lcore 20 as core 22 on socket 0 00:04:20.960 EAL: Detected lcore 21 as core 24 on socket 0 00:04:20.960 EAL: Detected lcore 22 as core 25 on socket 0 00:04:20.960 EAL: Detected lcore 23 as core 26 on socket 0 00:04:20.960 EAL: Detected lcore 24 as core 27 on socket 0 00:04:20.960 EAL: Detected lcore 25 as core 28 on socket 0 00:04:20.960 EAL: Detected lcore 26 as core 29 on socket 0 00:04:20.960 EAL: Detected lcore 27 as core 30 on socket 0 00:04:20.960 EAL: Detected lcore 28 as core 0 on socket 1 00:04:20.960 EAL: Detected lcore 29 as core 1 on socket 1 00:04:20.960 EAL: Detected lcore 30 as core 2 on socket 1 00:04:20.960 EAL: Detected lcore 31 as core 3 on socket 1 00:04:20.960 EAL: Detected lcore 32 as core 4 on socket 1 00:04:20.960 EAL: Detected lcore 33 as core 5 on socket 1 00:04:20.960 EAL: Detected lcore 34 as core 6 on socket 1 00:04:20.960 EAL: Detected lcore 35 as core 8 on socket 1 00:04:20.960 EAL: Detected lcore 36 as core 9 on socket 1 00:04:20.960 EAL: Detected lcore 37 as core 10 on socket 1 00:04:20.960 EAL: Detected lcore 38 as core 11 on socket 1 00:04:20.960 EAL: Detected lcore 39 as core 12 on socket 1 00:04:20.960 EAL: Detected lcore 40 as core 13 on socket 1 00:04:20.960 EAL: Detected lcore 41 as core 14 on socket 1 00:04:20.960 EAL: Detected lcore 42 as core 16 on socket 1 00:04:20.960 EAL: Detected lcore 43 as core 17 on socket 1 00:04:20.960 EAL: Detected lcore 44 as core 18 on socket 1 00:04:20.960 EAL: Detected lcore 45 as core 19 on socket 1 00:04:20.960 EAL: Detected lcore 46 as core 20 on socket 1 00:04:20.960 EAL: Detected lcore 47 as core 21 on socket 1 00:04:20.960 EAL: Detected lcore 48 as core 22 on 
socket 1 00:04:20.960 EAL: Detected lcore 49 as core 24 on socket 1 00:04:20.960 EAL: Detected lcore 50 as core 25 on socket 1 00:04:20.960 EAL: Detected lcore 51 as core 26 on socket 1 00:04:20.961 EAL: Detected lcore 52 as core 27 on socket 1 00:04:20.961 EAL: Detected lcore 53 as core 28 on socket 1 00:04:20.961 EAL: Detected lcore 54 as core 29 on socket 1 00:04:20.961 EAL: Detected lcore 55 as core 30 on socket 1 00:04:20.961 EAL: Detected lcore 56 as core 0 on socket 0 00:04:20.961 EAL: Detected lcore 57 as core 1 on socket 0 00:04:20.961 EAL: Detected lcore 58 as core 2 on socket 0 00:04:20.961 EAL: Detected lcore 59 as core 3 on socket 0 00:04:20.961 EAL: Detected lcore 60 as core 4 on socket 0 00:04:20.961 EAL: Detected lcore 61 as core 5 on socket 0 00:04:20.961 EAL: Detected lcore 62 as core 6 on socket 0 00:04:20.961 EAL: Detected lcore 63 as core 8 on socket 0 00:04:20.961 EAL: Detected lcore 64 as core 9 on socket 0 00:04:20.961 EAL: Detected lcore 65 as core 10 on socket 0 00:04:20.961 EAL: Detected lcore 66 as core 11 on socket 0 00:04:20.961 EAL: Detected lcore 67 as core 12 on socket 0 00:04:20.961 EAL: Detected lcore 68 as core 13 on socket 0 00:04:20.961 EAL: Detected lcore 69 as core 14 on socket 0 00:04:20.961 EAL: Detected lcore 70 as core 16 on socket 0 00:04:20.961 EAL: Detected lcore 71 as core 17 on socket 0 00:04:20.961 EAL: Detected lcore 72 as core 18 on socket 0 00:04:20.961 EAL: Detected lcore 73 as core 19 on socket 0 00:04:20.961 EAL: Detected lcore 74 as core 20 on socket 0 00:04:20.961 EAL: Detected lcore 75 as core 21 on socket 0 00:04:20.961 EAL: Detected lcore 76 as core 22 on socket 0 00:04:20.961 EAL: Detected lcore 77 as core 24 on socket 0 00:04:20.961 EAL: Detected lcore 78 as core 25 on socket 0 00:04:20.961 EAL: Detected lcore 79 as core 26 on socket 0 00:04:20.961 EAL: Detected lcore 80 as core 27 on socket 0 00:04:20.961 EAL: Detected lcore 81 as core 28 on socket 0 00:04:20.961 EAL: Detected lcore 82 as core 29 on socket 0 00:04:20.961 EAL: Detected lcore 83 as core 30 on socket 0 00:04:20.961 EAL: Detected lcore 84 as core 0 on socket 1 00:04:20.961 EAL: Detected lcore 85 as core 1 on socket 1 00:04:20.961 EAL: Detected lcore 86 as core 2 on socket 1 00:04:20.961 EAL: Detected lcore 87 as core 3 on socket 1 00:04:20.961 EAL: Detected lcore 88 as core 4 on socket 1 00:04:20.961 EAL: Detected lcore 89 as core 5 on socket 1 00:04:20.961 EAL: Detected lcore 90 as core 6 on socket 1 00:04:20.961 EAL: Detected lcore 91 as core 8 on socket 1 00:04:20.961 EAL: Detected lcore 92 as core 9 on socket 1 00:04:20.961 EAL: Detected lcore 93 as core 10 on socket 1 00:04:20.961 EAL: Detected lcore 94 as core 11 on socket 1 00:04:20.961 EAL: Detected lcore 95 as core 12 on socket 1 00:04:20.961 EAL: Detected lcore 96 as core 13 on socket 1 00:04:20.961 EAL: Detected lcore 97 as core 14 on socket 1 00:04:20.961 EAL: Detected lcore 98 as core 16 on socket 1 00:04:20.961 EAL: Detected lcore 99 as core 17 on socket 1 00:04:20.961 EAL: Detected lcore 100 as core 18 on socket 1 00:04:20.961 EAL: Detected lcore 101 as core 19 on socket 1 00:04:20.961 EAL: Detected lcore 102 as core 20 on socket 1 00:04:20.961 EAL: Detected lcore 103 as core 21 on socket 1 00:04:20.961 EAL: Detected lcore 104 as core 22 on socket 1 00:04:20.961 EAL: Detected lcore 105 as core 24 on socket 1 00:04:20.961 EAL: Detected lcore 106 as core 25 on socket 1 00:04:20.961 EAL: Detected lcore 107 as core 26 on socket 1 00:04:20.961 EAL: Detected lcore 108 as core 27 on socket 1 00:04:20.961 
EAL: Detected lcore 109 as core 28 on socket 1 00:04:20.961 EAL: Detected lcore 110 as core 29 on socket 1 00:04:20.961 EAL: Detected lcore 111 as core 30 on socket 1 00:04:20.961 EAL: Maximum logical cores by configuration: 128 00:04:20.961 EAL: Detected CPU lcores: 112 00:04:20.961 EAL: Detected NUMA nodes: 2 00:04:20.961 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:20.961 EAL: Detected shared linkage of DPDK 00:04:20.961 EAL: No shared files mode enabled, IPC will be disabled 00:04:21.221 EAL: Bus pci wants IOVA as 'DC' 00:04:21.221 EAL: Buses did not request a specific IOVA mode. 00:04:21.221 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:21.221 EAL: Selected IOVA mode 'VA' 00:04:21.221 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.221 EAL: Probing VFIO support... 00:04:21.221 EAL: IOMMU type 1 (Type 1) is supported 00:04:21.221 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:21.221 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:21.221 EAL: VFIO support initialized 00:04:21.221 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.221 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.221 EAL: Setting up physically contiguous memory... 00:04:21.221 EAL: Setting maximum number of open files to 524288 00:04:21.221 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.221 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:21.221 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.221 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x201000a00000 
(size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:21.221 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.221 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:21.221 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.221 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.221 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:21.221 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:21.221 EAL: Hugepages will be freed exactly as allocated. 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: TSC frequency is ~2200000 KHz 00:04:21.221 EAL: Main lcore 0 is ready (tid=7f2cfc380a00;cpuset=[0]) 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 0 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.221 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.221 00:04:21.221 00:04:21.221 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.221 http://cunit.sourceforge.net/ 00:04:21.221 00:04:21.221 00:04:21.221 Suite: components_suite 00:04:21.221 Test: vtophys_malloc_test ...passed 00:04:21.221 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.221 EAL: Trying to obtain current memory policy. 
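The heap expand/shrink rounds that follow pull 2 MB hugepages from the pool reserved earlier in the job; provisioning that pool by hand looks roughly like this (HUGEMEM is in MB and the value is illustrative; setup.sh is the same script invoked throughout this log):

  sudo HUGEMEM=2048 ./scripts/setup.sh     # reserve hugepages and bind devices to vfio-pci
  grep -i hugepages /proc/meminfo          # check HugePages_Total / HugePages_Free
  ./scripts/setup.sh status                # per-NUMA-node view, as printed earlier in this log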
00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.221 EAL: Trying to obtain current memory policy. 
00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.221 EAL: Trying to obtain current memory policy. 00:04:21.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.221 EAL: Restoring previous memory policy: 4 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.221 EAL: request: mp_malloc_sync 00:04:21.221 EAL: No shared files mode enabled, IPC is disabled 00:04:21.221 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.481 EAL: request: mp_malloc_sync 00:04:21.481 EAL: No shared files mode enabled, IPC is disabled 00:04:21.481 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.481 EAL: Trying to obtain current memory policy. 00:04:21.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.481 EAL: Restoring previous memory policy: 4 00:04:21.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.481 EAL: request: mp_malloc_sync 00:04:21.481 EAL: No shared files mode enabled, IPC is disabled 00:04:21.481 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.740 EAL: request: mp_malloc_sync 00:04:21.741 EAL: No shared files mode enabled, IPC is disabled 00:04:21.741 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.741 EAL: Trying to obtain current memory policy. 
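The sizes in this sequence (4MB, 6MB, 10MB, 18MB, 34MB, ... 1026MB) are the test's doubling allocations, each showing up on the heap as 2^k plus about 2MB of hugepage-granular overhead. A sketch of a loop that produces this expand/shrink pattern, assuming an already-initialized SPDK env (spdk_dma_malloc()/spdk_dma_free() are the public calls; the real vtophys test additionally checks the physical translation of every buffer):

#include <spdk/stdinc.h>
#include <spdk/env.h>

static void
walk_heap_sizes(void)
{
	size_t size;
	void *buf;

	for (size = 2 * 1024 * 1024; size <= 1024ULL * 1024 * 1024; size *= 2) {
		/* 2MB-aligned, DMA-safe memory from the DPDK heap. */
		buf = spdk_dma_malloc(size, 0x200000, NULL);
		if (buf == NULL) {
			break;	/* out of hugepages */
		}
		memset(buf, 0xa5, size);	/* fault the pages in */
		spdk_dma_free(buf);	/* lets the heap shrink again */
	}
}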
00:04:21.741 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:22.000 EAL: Restoring previous memory policy: 4
00:04:22.000 EAL: Calling mem event callback 'spdk:(nil)'
00:04:22.000 EAL: request: mp_malloc_sync
00:04:22.000 EAL: No shared files mode enabled, IPC is disabled
00:04:22.000 EAL: Heap on socket 0 was expanded by 1026MB
00:04:22.000 EAL: Calling mem event callback 'spdk:(nil)'
00:04:22.259 EAL: request: mp_malloc_sync
00:04:22.259 EAL: No shared files mode enabled, IPC is disabled
00:04:22.259 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:22.259 passed
00:04:22.259
00:04:22.259 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:22.259               suites      1      1    n/a      0        0
00:04:22.259                tests      2      2      2      0        0
00:04:22.259              asserts    497    497    497      0      n/a
00:04:22.259
00:04:22.259 Elapsed time =    1.019 seconds
00:04:22.259 EAL: Calling mem event callback 'spdk:(nil)'
00:04:22.259 EAL: request: mp_malloc_sync
00:04:22.259 EAL: No shared files mode enabled, IPC is disabled
00:04:22.259 EAL: Heap on socket 0 was shrunk by 2MB
00:04:22.259 EAL: No shared files mode enabled, IPC is disabled
00:04:22.259 EAL: No shared files mode enabled, IPC is disabled
00:04:22.259 EAL: No shared files mode enabled, IPC is disabled
00:04:22.259
00:04:22.259 real 0m1.161s
00:04:22.259 user 0m0.675s
00:04:22.259 sys 0m0.453s
00:04:22.259 03:55:36 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:22.259 03:55:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.259 ************************************
00:04:22.259 END TEST env_vtophys
00:04:22.259 ************************************
00:04:22.259 03:55:36 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.259 03:55:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:22.259 03:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:22.259 03:55:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.259 ************************************
00:04:22.259 START TEST env_pci
00:04:22.259 ************************************
00:04:22.259 03:55:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:22.518
00:04:22.518
00:04:22.518 CUnit - A unit testing framework for C - Version 2.1-3
00:04:22.518 http://cunit.sourceforge.net/
00:04:22.518
00:04:22.518
00:04:22.518 Suite: pci
00:04:22.518 Test: pci_hook ...[2024-04-19 03:55:36.793517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3631252 has claimed it
00:04:22.518 EAL: Cannot find device (10000:00:01.0)
00:04:22.518 EAL: Failed to attach device on primary process
00:04:22.518 passed
00:04:22.518
00:04:22.518 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:22.518               suites      1      1    n/a      0        0
00:04:22.518                tests      1      1      1      0        0
00:04:22.518              asserts     25     25     25      0      n/a
00:04:22.518
00:04:22.518 Elapsed time =    0.028 seconds
00:04:22.518
00:04:22.518 real 0m0.048s
00:04:22.518 user 0m0.016s
00:04:22.518 sys 0m0.032s
00:04:22.519 03:55:36 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:22.519 03:55:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.519 ************************************
00:04:22.519 END TEST env_pci
00:04:22.519 ************************************
00:04:22.519 03:55:36 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:22.519 03:55:36 -- env/env.sh@15 -- # uname
00:04:22.519 03:55:36 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:22.519 03:55:36 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:22.519 03:55:36 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.519 03:55:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:22.519 03:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:22.519 03:55:36 -- common/autotest_common.sh@10 -- # set +x
00:04:22.519 ************************************
00:04:22.519 START TEST env_dpdk_post_init
00:04:22.519 ************************************
00:04:22.519 03:55:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:22.519 EAL: Detected CPU lcores: 112
00:04:22.519 EAL: Detected NUMA nodes: 2
00:04:22.519 EAL: Detected shared linkage of DPDK
00:04:22.519 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:22.778 EAL: Selected IOVA mode 'VA'
00:04:22.778 EAL: No free 2048 kB hugepages reported on node 1
00:04:22.778 EAL: VFIO support initialized
00:04:22.778 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:22.778 EAL: Using IOMMU type 1 (Type 1)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:22.778 EAL: Ignore mapping IO port bar(1)
00:04:22.778 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:23.037 EAL: Ignore mapping IO port bar(1)
00:04:23.037 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:23.037 EAL: Ignore mapping IO port bar(1)
00:04:23.037 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:23.037 EAL: Ignore mapping IO port bar(1)
00:04:23.037 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:23.037 EAL: Ignore mapping IO port bar(1)
00:04:23.037 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:23.604 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1)
00:04:26.891 EAL: Releasing PCI mapped resource for 0000:86:00.0
00:04:26.891 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000
00:04:27.150 Starting DPDK initialization...
00:04:27.150 Starting SPDK post initialization...
00:04:27.150 SPDK NVMe probe
00:04:27.150 Attaching to 0000:86:00.0
00:04:27.150 Attached to 0000:86:00.0
00:04:27.150 Cleaning up...
00:04:27.150
00:04:27.150 real 0m4.443s
00:04:27.150 user 0m3.361s
00:04:27.150 sys 0m0.148s
00:04:27.150 03:55:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:27.150 03:55:41 -- common/autotest_common.sh@10 -- # set +x
00:04:27.150 ************************************
00:04:27.150 END TEST env_dpdk_post_init
00:04:27.150 ************************************
00:04:27.150 03:55:41 -- env/env.sh@26 -- # uname
00:04:27.150 03:55:41 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:27.150 03:55:41 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:27.150 03:55:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:27.150 03:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:27.150 03:55:41 -- common/autotest_common.sh@10 -- # set +x
00:04:27.150 ************************************
00:04:27.150 START TEST env_mem_callbacks
00:04:27.150 ************************************
00:04:27.150 03:55:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:27.150 EAL: Detected CPU lcores: 112
00:04:27.150 EAL: Detected NUMA nodes: 2
00:04:27.150 EAL: Detected shared linkage of DPDK
00:04:27.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:27.150 EAL: Selected IOVA mode 'VA'
00:04:27.150 EAL: No free 2048 kB hugepages reported on node 1
00:04:27.150 EAL: VFIO support initialized
00:04:27.150 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:27.150
00:04:27.150
00:04:27.150 CUnit - A unit testing framework for C - Version 2.1-3
00:04:27.150 http://cunit.sourceforge.net/
00:04:27.150
00:04:27.150
00:04:27.150 Suite: memory
00:04:27.150 Test: test ...
00:04:27.150 register 0x200000200000 2097152
00:04:27.150 malloc 3145728
00:04:27.150 register 0x200000400000 4194304
00:04:27.150 buf 0x200000500000 len 3145728 PASSED
00:04:27.150 malloc 64
00:04:27.150 buf 0x2000004fff40 len 64 PASSED
00:04:27.150 malloc 4194304
00:04:27.150 register 0x200000800000 6291456
00:04:27.150 buf 0x200000a00000 len 4194304 PASSED
00:04:27.150 free 0x200000500000 3145728
00:04:27.150 free 0x2000004fff40 64
00:04:27.150 unregister 0x200000400000 4194304 PASSED
00:04:27.150 free 0x200000a00000 4194304
00:04:27.150 unregister 0x200000800000 6291456 PASSED
00:04:27.150 malloc 8388608
00:04:27.150 register 0x200000400000 10485760
00:04:27.150 buf 0x200000600000 len 8388608 PASSED
00:04:27.150 free 0x200000600000 8388608
00:04:27.150 unregister 0x200000400000 10485760 PASSED
00:04:27.409 passed
00:04:27.409
00:04:27.409 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:27.409               suites      1      1    n/a      0        0
00:04:27.409                tests      1      1      1      0        0
00:04:27.409              asserts     15     15     15      0      n/a
00:04:27.409
00:04:27.409 Elapsed time =    0.008 seconds
00:04:27.409
00:04:27.409 real 0m0.061s
00:04:27.409 user 0m0.017s
00:04:27.409 sys 0m0.044s
00:04:27.409 03:55:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:27.409 03:55:41 -- common/autotest_common.sh@10 -- # set +x
00:04:27.409 ************************************
00:04:27.409 END TEST env_mem_callbacks
00:04:27.409 ************************************
00:04:27.409
00:04:27.409 real 0m6.904s
00:04:27.409 user 0m4.629s
00:04:27.409 sys 0m1.257s
00:04:27.409 03:55:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:27.409 03:55:41 -- common/autotest_common.sh@10 -- # set +x
00:04:27.409 ************************************
00:04:27.409 END TEST env
00:04:27.409 ************************************
00:04:27.409 03:55:41 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:27.409 03:55:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:27.409 03:55:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:27.409 03:55:41 -- common/autotest_common.sh@10 -- # set +x
00:04:27.410 ************************************
00:04:27.410 START TEST rpc
00:04:27.410 ************************************
00:04:27.410 03:55:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:27.675 * Looking for test storage...
00:04:27.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.675 03:55:41 -- rpc/rpc.sh@65 -- # spdk_pid=3632437
00:04:27.675 03:55:41 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:27.675 03:55:41 -- rpc/rpc.sh@67 -- # waitforlisten 3632437
00:04:27.675 03:55:41 -- common/autotest_common.sh@817 -- # '[' -z 3632437 ']'
00:04:27.675 03:55:41 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:27.675 03:55:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:27.675 03:55:41 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:27.675 03:55:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:27.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
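The register/unregister lines above are printed by the test's notification callback each time a region of memory is announced to or withdrawn from the SPDK env layer. Applications do the same for externally allocated buffers so that DMA-capable translations exist for them; roughly (a sketch, not the mem_callbacks test source itself):

#include <spdk/stdinc.h>
#include <spdk/env.h>
#include <sys/mman.h>

/* len must be a multiple of 2MB: spdk_mem_register() works on 2MB-aligned
 * regions, which is why a hugepage mapping is used here (Linux-specific). */
static int
register_external_buffer(size_t len)
{
	void *buf;
	int rc;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		return -1;
	}
	rc = spdk_mem_register(buf, len);	/* fires REGISTER notifications */
	if (rc == 0) {
		/* ... the buffer is now usable on DMA/I-O paths ... */
		spdk_mem_unregister(buf, len);	/* fires UNREGISTER notifications */
	}
	munmap(buf, len);
	return rc;
}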
00:04:27.675 03:55:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.675 03:55:41 -- common/autotest_common.sh@10 -- # set +x 00:04:27.675 [2024-04-19 03:55:42.033043] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:04:27.675 [2024-04-19 03:55:42.033099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632437 ] 00:04:27.675 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.675 [2024-04-19 03:55:42.114102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.937 [2024-04-19 03:55:42.203151] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.937 [2024-04-19 03:55:42.203190] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3632437' to capture a snapshot of events at runtime. 00:04:27.937 [2024-04-19 03:55:42.203200] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:27.937 [2024-04-19 03:55:42.203209] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:27.937 [2024-04-19 03:55:42.203216] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3632437 for offline analysis/debug. 00:04:27.937 [2024-04-19 03:55:42.203243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.504 03:55:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.504 03:55:42 -- common/autotest_common.sh@850 -- # return 0 00:04:28.504 03:55:42 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.504 03:55:42 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.504 03:55:42 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:28.504 03:55:42 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:28.504 03:55:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.504 03:55:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.504 03:55:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 ************************************ 00:04:28.763 START TEST rpc_integrity 00:04:28.763 ************************************ 00:04:28.763 03:55:43 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:28.763 03:55:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.763 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.763 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.763 03:55:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.763 03:55:43 -- rpc/rpc.sh@13 -- # jq length 00:04:28.763 03:55:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.763 03:55:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.763 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 
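From here on, the rpc suite drives the freshly started spdk_tgt entirely over /var/tmp/spdk.sock: rpc_cmd issues JSON-RPC methods such as bdev_malloc_create and bdev_get_bdevs, and jq inspects the replies. On the C side, a target exposes such a method roughly like this (a sketch; the method name example_ping and its behavior are invented for illustration):

#include <spdk/rpc.h>
#include <spdk/jsonrpc.h>
#include <spdk/json.h>

static void
rpc_example_ping(struct spdk_jsonrpc_request *request,
		 const struct spdk_json_val *params)
{
	struct spdk_json_write_ctx *w;

	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "example_ping takes no parameters");
		return;
	}
	w = spdk_jsonrpc_begin_result(request);
	spdk_json_write_string(w, "pong");
	spdk_jsonrpc_end_result(request, w);
}
/* Callable once the app reaches its runtime state, like the bdev_* methods. */
SPDK_RPC_REGISTER("example_ping", rpc_example_ping, SPDK_RPC_RUNTIME)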
00:04:28.763 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.763 03:55:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:28.763 03:55:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.763 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.763 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.763 03:55:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.763 { 00:04:28.763 "name": "Malloc0", 00:04:28.763 "aliases": [ 00:04:28.763 "f10f7f23-08b5-4374-a2e6-c81f144b11ad" 00:04:28.763 ], 00:04:28.763 "product_name": "Malloc disk", 00:04:28.763 "block_size": 512, 00:04:28.763 "num_blocks": 16384, 00:04:28.763 "uuid": "f10f7f23-08b5-4374-a2e6-c81f144b11ad", 00:04:28.763 "assigned_rate_limits": { 00:04:28.763 "rw_ios_per_sec": 0, 00:04:28.763 "rw_mbytes_per_sec": 0, 00:04:28.763 "r_mbytes_per_sec": 0, 00:04:28.763 "w_mbytes_per_sec": 0 00:04:28.763 }, 00:04:28.763 "claimed": false, 00:04:28.763 "zoned": false, 00:04:28.763 "supported_io_types": { 00:04:28.763 "read": true, 00:04:28.763 "write": true, 00:04:28.763 "unmap": true, 00:04:28.763 "write_zeroes": true, 00:04:28.763 "flush": true, 00:04:28.763 "reset": true, 00:04:28.763 "compare": false, 00:04:28.763 "compare_and_write": false, 00:04:28.763 "abort": true, 00:04:28.763 "nvme_admin": false, 00:04:28.763 "nvme_io": false 00:04:28.763 }, 00:04:28.763 "memory_domains": [ 00:04:28.763 { 00:04:28.763 "dma_device_id": "system", 00:04:28.763 "dma_device_type": 1 00:04:28.763 }, 00:04:28.763 { 00:04:28.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.763 "dma_device_type": 2 00:04:28.763 } 00:04:28.763 ], 00:04:28.763 "driver_specific": {} 00:04:28.763 } 00:04:28.763 ]' 00:04:28.763 03:55:43 -- rpc/rpc.sh@17 -- # jq length 00:04:28.763 03:55:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.764 03:55:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.764 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.764 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.764 [2024-04-19 03:55:43.220279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.764 [2024-04-19 03:55:43.220316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.764 [2024-04-19 03:55:43.220332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2262700 00:04:28.764 [2024-04-19 03:55:43.220346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.764 [2024-04-19 03:55:43.221811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.764 [2024-04-19 03:55:43.221837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.764 Passthru0 00:04:28.764 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.764 03:55:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.764 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.764 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:28.764 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.764 03:55:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.764 { 00:04:28.764 "name": "Malloc0", 00:04:28.764 "aliases": [ 00:04:28.764 "f10f7f23-08b5-4374-a2e6-c81f144b11ad" 00:04:28.764 ], 00:04:28.764 "product_name": "Malloc disk", 00:04:28.764 "block_size": 512, 
00:04:28.764 "num_blocks": 16384, 00:04:28.764 "uuid": "f10f7f23-08b5-4374-a2e6-c81f144b11ad", 00:04:28.764 "assigned_rate_limits": { 00:04:28.764 "rw_ios_per_sec": 0, 00:04:28.764 "rw_mbytes_per_sec": 0, 00:04:28.764 "r_mbytes_per_sec": 0, 00:04:28.764 "w_mbytes_per_sec": 0 00:04:28.764 }, 00:04:28.764 "claimed": true, 00:04:28.764 "claim_type": "exclusive_write", 00:04:28.764 "zoned": false, 00:04:28.764 "supported_io_types": { 00:04:28.764 "read": true, 00:04:28.764 "write": true, 00:04:28.764 "unmap": true, 00:04:28.764 "write_zeroes": true, 00:04:28.764 "flush": true, 00:04:28.764 "reset": true, 00:04:28.764 "compare": false, 00:04:28.764 "compare_and_write": false, 00:04:28.764 "abort": true, 00:04:28.764 "nvme_admin": false, 00:04:28.764 "nvme_io": false 00:04:28.764 }, 00:04:28.764 "memory_domains": [ 00:04:28.764 { 00:04:28.764 "dma_device_id": "system", 00:04:28.764 "dma_device_type": 1 00:04:28.764 }, 00:04:28.764 { 00:04:28.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.764 "dma_device_type": 2 00:04:28.764 } 00:04:28.764 ], 00:04:28.764 "driver_specific": {} 00:04:28.764 }, 00:04:28.764 { 00:04:28.764 "name": "Passthru0", 00:04:28.764 "aliases": [ 00:04:28.764 "34c25f8b-b590-5071-bd1f-81c7f7ad479c" 00:04:28.764 ], 00:04:28.764 "product_name": "passthru", 00:04:28.764 "block_size": 512, 00:04:28.764 "num_blocks": 16384, 00:04:28.764 "uuid": "34c25f8b-b590-5071-bd1f-81c7f7ad479c", 00:04:28.764 "assigned_rate_limits": { 00:04:28.764 "rw_ios_per_sec": 0, 00:04:28.764 "rw_mbytes_per_sec": 0, 00:04:28.764 "r_mbytes_per_sec": 0, 00:04:28.764 "w_mbytes_per_sec": 0 00:04:28.764 }, 00:04:28.764 "claimed": false, 00:04:28.764 "zoned": false, 00:04:28.764 "supported_io_types": { 00:04:28.764 "read": true, 00:04:28.764 "write": true, 00:04:28.764 "unmap": true, 00:04:28.764 "write_zeroes": true, 00:04:28.764 "flush": true, 00:04:28.764 "reset": true, 00:04:28.764 "compare": false, 00:04:28.764 "compare_and_write": false, 00:04:28.764 "abort": true, 00:04:28.764 "nvme_admin": false, 00:04:28.764 "nvme_io": false 00:04:28.764 }, 00:04:28.764 "memory_domains": [ 00:04:28.764 { 00:04:28.764 "dma_device_id": "system", 00:04:28.764 "dma_device_type": 1 00:04:28.764 }, 00:04:28.764 { 00:04:28.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.764 "dma_device_type": 2 00:04:28.764 } 00:04:28.764 ], 00:04:28.764 "driver_specific": { 00:04:28.764 "passthru": { 00:04:28.764 "name": "Passthru0", 00:04:28.764 "base_bdev_name": "Malloc0" 00:04:28.764 } 00:04:28.764 } 00:04:28.764 } 00:04:28.764 ]' 00:04:28.764 03:55:43 -- rpc/rpc.sh@21 -- # jq length 00:04:29.023 03:55:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.023 03:55:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.023 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.023 03:55:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.023 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.023 03:55:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.023 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.023 03:55:43 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.023 03:55:43 -- rpc/rpc.sh@26 -- # jq length 00:04:29.023 03:55:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.023 00:04:29.023 real 0m0.291s 00:04:29.023 user 0m0.183s 00:04:29.023 sys 0m0.037s 00:04:29.023 03:55:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 ************************************ 00:04:29.023 END TEST rpc_integrity 00:04:29.023 ************************************ 00:04:29.023 03:55:43 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:29.023 03:55:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.023 03:55:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.023 ************************************ 00:04:29.023 START TEST rpc_plugins 00:04:29.023 ************************************ 00:04:29.023 03:55:43 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:29.023 03:55:43 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:29.023 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.023 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.282 03:55:43 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:29.282 03:55:43 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:29.282 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.282 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.282 03:55:43 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.282 { 00:04:29.282 "name": "Malloc1", 00:04:29.282 "aliases": [ 00:04:29.282 "403b7b64-2aa0-4d21-8a16-f8272a6cb4a4" 00:04:29.282 ], 00:04:29.282 "product_name": "Malloc disk", 00:04:29.282 "block_size": 4096, 00:04:29.282 "num_blocks": 256, 00:04:29.282 "uuid": "403b7b64-2aa0-4d21-8a16-f8272a6cb4a4", 00:04:29.282 "assigned_rate_limits": { 00:04:29.282 "rw_ios_per_sec": 0, 00:04:29.282 "rw_mbytes_per_sec": 0, 00:04:29.282 "r_mbytes_per_sec": 0, 00:04:29.282 "w_mbytes_per_sec": 0 00:04:29.282 }, 00:04:29.282 "claimed": false, 00:04:29.282 "zoned": false, 00:04:29.282 "supported_io_types": { 00:04:29.282 "read": true, 00:04:29.282 "write": true, 00:04:29.282 "unmap": true, 00:04:29.282 "write_zeroes": true, 00:04:29.282 "flush": true, 00:04:29.282 "reset": true, 00:04:29.282 "compare": false, 00:04:29.282 "compare_and_write": false, 00:04:29.282 "abort": true, 00:04:29.282 "nvme_admin": false, 00:04:29.282 "nvme_io": false 00:04:29.282 }, 00:04:29.282 "memory_domains": [ 00:04:29.282 { 00:04:29.282 "dma_device_id": "system", 00:04:29.282 "dma_device_type": 1 00:04:29.282 }, 00:04:29.282 { 00:04:29.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.282 "dma_device_type": 2 00:04:29.282 } 00:04:29.282 ], 00:04:29.282 "driver_specific": {} 00:04:29.282 } 00:04:29.282 ]' 00:04:29.282 03:55:43 -- rpc/rpc.sh@32 -- # jq length 00:04:29.282 03:55:43 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.282 03:55:43 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.282 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.282 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.282 03:55:43 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.282 03:55:43 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:29.282 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.282 03:55:43 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.282 03:55:43 -- rpc/rpc.sh@36 -- # jq length 00:04:29.282 03:55:43 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.282 00:04:29.282 real 0m0.141s 00:04:29.282 user 0m0.091s 00:04:29.282 sys 0m0.016s 00:04:29.282 03:55:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.282 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.282 ************************************ 00:04:29.282 END TEST rpc_plugins 00:04:29.282 ************************************ 00:04:29.282 03:55:43 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.282 03:55:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.282 03:55:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.282 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.539 ************************************ 00:04:29.539 START TEST rpc_trace_cmd_test 00:04:29.539 ************************************ 00:04:29.539 03:55:43 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:29.539 03:55:43 -- rpc/rpc.sh@40 -- # local info 00:04:29.539 03:55:43 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.539 03:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.539 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.539 03:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.539 03:55:43 -- rpc/rpc.sh@42 -- # info='{ 00:04:29.539 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3632437", 00:04:29.539 "tpoint_group_mask": "0x8", 00:04:29.539 "iscsi_conn": { 00:04:29.539 "mask": "0x2", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "scsi": { 00:04:29.539 "mask": "0x4", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "bdev": { 00:04:29.539 "mask": "0x8", 00:04:29.539 "tpoint_mask": "0xffffffffffffffff" 00:04:29.539 }, 00:04:29.539 "nvmf_rdma": { 00:04:29.539 "mask": "0x10", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "nvmf_tcp": { 00:04:29.539 "mask": "0x20", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "ftl": { 00:04:29.539 "mask": "0x40", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "blobfs": { 00:04:29.539 "mask": "0x80", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "dsa": { 00:04:29.539 "mask": "0x200", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "thread": { 00:04:29.539 "mask": "0x400", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "nvme_pcie": { 00:04:29.539 "mask": "0x800", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "iaa": { 00:04:29.539 "mask": "0x1000", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "nvme_tcp": { 00:04:29.539 "mask": "0x2000", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "bdev_nvme": { 00:04:29.539 "mask": "0x4000", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 }, 00:04:29.539 "sock": { 00:04:29.539 "mask": "0x8000", 00:04:29.539 "tpoint_mask": "0x0" 00:04:29.539 } 00:04:29.539 }' 00:04:29.539 03:55:43 -- rpc/rpc.sh@43 -- # jq length 00:04:29.539 03:55:43 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:29.539 03:55:43 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.540 03:55:43 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.540 03:55:43 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
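The masks in the trace_get_info output above decode as one bit per tracepoint group: bdev is group 3, so starting spdk_tgt with '-e bdev' yields a tpoint_group_mask of 0x8, and within the enabled group every individual tracepoint bit is set. The arithmetic, spelled out:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int
main(void)
{
	int bdev_group = 3;				/* bdev tracepoint group number */
	uint64_t group_mask = 1ULL << bdev_group;	/* -> 0x8, as in the log */
	uint64_t tpoint_mask = UINT64_MAX;		/* every tracepoint in the group */

	printf("tpoint_group_mask: 0x%" PRIx64 "\n", group_mask);
	printf("bdev tpoint_mask:  0x%" PRIx64 "\n", tpoint_mask);
	return 0;
}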
00:04:29.540 03:55:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.540 03:55:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.540 03:55:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.540 03:55:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.797 03:55:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:29.797 00:04:29.797 real 0m0.249s 00:04:29.797 user 0m0.220s 00:04:29.797 sys 0m0.022s 00:04:29.797 03:55:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.797 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.797 ************************************ 00:04:29.797 END TEST rpc_trace_cmd_test 00:04:29.797 ************************************ 00:04:29.797 03:55:44 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.797 03:55:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.797 03:55:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.797 03:55:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.797 03:55:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.797 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.797 ************************************ 00:04:29.797 START TEST rpc_daemon_integrity 00:04:29.797 ************************************ 00:04:29.797 03:55:44 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:29.797 03:55:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.797 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:29.797 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:29.797 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:29.797 03:55:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.797 03:55:44 -- rpc/rpc.sh@13 -- # jq length 00:04:30.056 03:55:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.056 03:55:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:30.056 03:55:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.056 { 00:04:30.056 "name": "Malloc2", 00:04:30.056 "aliases": [ 00:04:30.056 "ec15fd5f-31bf-49eb-8c69-cd1ec0144415" 00:04:30.056 ], 00:04:30.056 "product_name": "Malloc disk", 00:04:30.056 "block_size": 512, 00:04:30.056 "num_blocks": 16384, 00:04:30.056 "uuid": "ec15fd5f-31bf-49eb-8c69-cd1ec0144415", 00:04:30.056 "assigned_rate_limits": { 00:04:30.056 "rw_ios_per_sec": 0, 00:04:30.056 "rw_mbytes_per_sec": 0, 00:04:30.056 "r_mbytes_per_sec": 0, 00:04:30.056 "w_mbytes_per_sec": 0 00:04:30.056 }, 00:04:30.056 "claimed": false, 00:04:30.056 "zoned": false, 00:04:30.056 "supported_io_types": { 00:04:30.056 "read": true, 00:04:30.056 "write": true, 00:04:30.056 "unmap": true, 00:04:30.056 "write_zeroes": true, 00:04:30.056 "flush": true, 00:04:30.056 "reset": true, 00:04:30.056 "compare": false, 00:04:30.056 "compare_and_write": false, 00:04:30.056 "abort": true, 00:04:30.056 "nvme_admin": false, 00:04:30.056 "nvme_io": false 00:04:30.056 }, 00:04:30.056 "memory_domains": [ 00:04:30.056 { 00:04:30.056 "dma_device_id": "system", 00:04:30.056 
"dma_device_type": 1 00:04:30.056 }, 00:04:30.056 { 00:04:30.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.056 "dma_device_type": 2 00:04:30.056 } 00:04:30.056 ], 00:04:30.056 "driver_specific": {} 00:04:30.056 } 00:04:30.056 ]' 00:04:30.056 03:55:44 -- rpc/rpc.sh@17 -- # jq length 00:04:30.056 03:55:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.056 03:55:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 [2024-04-19 03:55:44.411816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:30.056 [2024-04-19 03:55:44.411852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.056 [2024-04-19 03:55:44.411868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2266040 00:04:30.056 [2024-04-19 03:55:44.411877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.056 [2024-04-19 03:55:44.413262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.056 [2024-04-19 03:55:44.413286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.056 Passthru0 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.056 { 00:04:30.056 "name": "Malloc2", 00:04:30.056 "aliases": [ 00:04:30.056 "ec15fd5f-31bf-49eb-8c69-cd1ec0144415" 00:04:30.056 ], 00:04:30.056 "product_name": "Malloc disk", 00:04:30.056 "block_size": 512, 00:04:30.056 "num_blocks": 16384, 00:04:30.056 "uuid": "ec15fd5f-31bf-49eb-8c69-cd1ec0144415", 00:04:30.056 "assigned_rate_limits": { 00:04:30.056 "rw_ios_per_sec": 0, 00:04:30.056 "rw_mbytes_per_sec": 0, 00:04:30.056 "r_mbytes_per_sec": 0, 00:04:30.056 "w_mbytes_per_sec": 0 00:04:30.056 }, 00:04:30.056 "claimed": true, 00:04:30.056 "claim_type": "exclusive_write", 00:04:30.056 "zoned": false, 00:04:30.056 "supported_io_types": { 00:04:30.056 "read": true, 00:04:30.056 "write": true, 00:04:30.056 "unmap": true, 00:04:30.056 "write_zeroes": true, 00:04:30.056 "flush": true, 00:04:30.056 "reset": true, 00:04:30.056 "compare": false, 00:04:30.056 "compare_and_write": false, 00:04:30.056 "abort": true, 00:04:30.056 "nvme_admin": false, 00:04:30.056 "nvme_io": false 00:04:30.056 }, 00:04:30.056 "memory_domains": [ 00:04:30.056 { 00:04:30.056 "dma_device_id": "system", 00:04:30.056 "dma_device_type": 1 00:04:30.056 }, 00:04:30.056 { 00:04:30.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.056 "dma_device_type": 2 00:04:30.056 } 00:04:30.056 ], 00:04:30.056 "driver_specific": {} 00:04:30.056 }, 00:04:30.056 { 00:04:30.056 "name": "Passthru0", 00:04:30.056 "aliases": [ 00:04:30.056 "456ae18c-35ba-585e-b1ad-8c8e69e4d695" 00:04:30.056 ], 00:04:30.056 "product_name": "passthru", 00:04:30.056 "block_size": 512, 00:04:30.056 "num_blocks": 16384, 00:04:30.056 "uuid": "456ae18c-35ba-585e-b1ad-8c8e69e4d695", 00:04:30.056 "assigned_rate_limits": { 00:04:30.056 "rw_ios_per_sec": 0, 00:04:30.056 "rw_mbytes_per_sec": 0, 00:04:30.056 "r_mbytes_per_sec": 0, 00:04:30.056 
"w_mbytes_per_sec": 0 00:04:30.056 }, 00:04:30.056 "claimed": false, 00:04:30.056 "zoned": false, 00:04:30.056 "supported_io_types": { 00:04:30.056 "read": true, 00:04:30.056 "write": true, 00:04:30.056 "unmap": true, 00:04:30.056 "write_zeroes": true, 00:04:30.056 "flush": true, 00:04:30.056 "reset": true, 00:04:30.056 "compare": false, 00:04:30.056 "compare_and_write": false, 00:04:30.056 "abort": true, 00:04:30.056 "nvme_admin": false, 00:04:30.056 "nvme_io": false 00:04:30.056 }, 00:04:30.056 "memory_domains": [ 00:04:30.056 { 00:04:30.056 "dma_device_id": "system", 00:04:30.056 "dma_device_type": 1 00:04:30.056 }, 00:04:30.056 { 00:04:30.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.056 "dma_device_type": 2 00:04:30.056 } 00:04:30.056 ], 00:04:30.056 "driver_specific": { 00:04:30.056 "passthru": { 00:04:30.056 "name": "Passthru0", 00:04:30.056 "base_bdev_name": "Malloc2" 00:04:30.056 } 00:04:30.056 } 00:04:30.056 } 00:04:30.056 ]' 00:04:30.056 03:55:44 -- rpc/rpc.sh@21 -- # jq length 00:04:30.056 03:55:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.056 03:55:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.056 03:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 03:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.056 03:55:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.056 03:55:44 -- rpc/rpc.sh@26 -- # jq length 00:04:30.056 03:55:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.056 00:04:30.056 real 0m0.289s 00:04:30.056 user 0m0.195s 00:04:30.056 sys 0m0.030s 00:04:30.056 03:55:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.056 03:55:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.056 ************************************ 00:04:30.056 END TEST rpc_daemon_integrity 00:04:30.056 ************************************ 00:04:30.315 03:55:44 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.315 03:55:44 -- rpc/rpc.sh@84 -- # killprocess 3632437 00:04:30.315 03:55:44 -- common/autotest_common.sh@936 -- # '[' -z 3632437 ']' 00:04:30.315 03:55:44 -- common/autotest_common.sh@940 -- # kill -0 3632437 00:04:30.315 03:55:44 -- common/autotest_common.sh@941 -- # uname 00:04:30.315 03:55:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:30.315 03:55:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3632437 00:04:30.315 03:55:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:30.315 03:55:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:30.315 03:55:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3632437' 00:04:30.315 killing process with pid 3632437 00:04:30.316 03:55:44 -- common/autotest_common.sh@955 -- # kill 3632437 00:04:30.316 03:55:44 -- common/autotest_common.sh@960 -- # wait 3632437 00:04:30.574 00:04:30.574 real 0m3.119s 00:04:30.574 user 0m4.122s 
00:04:30.574 sys 0m0.910s 00:04:30.574 03:55:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.574 03:55:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.574 ************************************ 00:04:30.574 END TEST rpc 00:04:30.574 ************************************ 00:04:30.574 03:55:45 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.574 03:55:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.574 03:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.574 03:55:45 -- common/autotest_common.sh@10 -- # set +x 00:04:30.832 ************************************ 00:04:30.832 START TEST skip_rpc 00:04:30.832 ************************************ 00:04:30.832 03:55:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:30.832 * Looking for test storage... 00:04:30.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.832 03:55:45 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.832 03:55:45 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:30.832 03:55:45 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.832 03:55:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.832 03:55:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.832 03:55:45 -- common/autotest_common.sh@10 -- # set +x 00:04:31.090 ************************************ 00:04:31.090 START TEST skip_rpc 00:04:31.090 ************************************ 00:04:31.090 03:55:45 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:31.090 03:55:45 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3633183 00:04:31.090 03:55:45 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.090 03:55:45 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.090 03:55:45 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.090 [2024-04-19 03:55:45.448700] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
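skip_rpc starts the target with --no-rpc-server, so the assertion below is simply that talking to the RPC socket fails. Stripped of the shell plumbing, every rpc_cmd in these tests amounts to a small JSON-RPC exchange over a Unix-domain socket; a bare-bones hedged sketch (single unframed read; the real client parses newline-delimited JSON):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	const char *req =
		"{\"jsonrpc\":\"2.0\",\"method\":\"spdk_get_version\",\"id\":1}";
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char resp[4096];
	ssize_t n;
	int fd;

	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");	/* with --no-rpc-server this is the failure */
		return 1;
	}
	(void)write(fd, req, strlen(req));
	n = read(fd, resp, sizeof(resp) - 1);	/* simplified: one read, no framing */
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}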
00:04:31.090 [2024-04-19 03:55:45.448733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633183 ] 00:04:31.090 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.090 [2024-04-19 03:55:45.516012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.090 [2024-04-19 03:55:45.604828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.389 03:55:50 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:36.389 03:55:50 -- common/autotest_common.sh@638 -- # local es=0 00:04:36.389 03:55:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:36.389 03:55:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:36.389 03:55:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:36.389 03:55:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:36.389 03:55:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:36.389 03:55:50 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:36.389 03:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.389 03:55:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 03:55:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:36.389 03:55:50 -- common/autotest_common.sh@641 -- # es=1 00:04:36.389 03:55:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:36.389 03:55:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:36.389 03:55:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:36.389 03:55:50 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:36.389 03:55:50 -- rpc/skip_rpc.sh@23 -- # killprocess 3633183 00:04:36.389 03:55:50 -- common/autotest_common.sh@936 -- # '[' -z 3633183 ']' 00:04:36.389 03:55:50 -- common/autotest_common.sh@940 -- # kill -0 3633183 00:04:36.389 03:55:50 -- common/autotest_common.sh@941 -- # uname 00:04:36.389 03:55:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.389 03:55:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3633183 00:04:36.389 03:55:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.389 03:55:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.389 03:55:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3633183' 00:04:36.389 killing process with pid 3633183 00:04:36.389 03:55:50 -- common/autotest_common.sh@955 -- # kill 3633183 00:04:36.389 03:55:50 -- common/autotest_common.sh@960 -- # wait 3633183 00:04:36.389 00:04:36.389 real 0m5.426s 00:04:36.389 user 0m5.159s 00:04:36.389 sys 0m0.290s 00:04:36.389 03:55:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.389 03:55:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 ************************************ 00:04:36.389 END TEST skip_rpc 00:04:36.389 ************************************ 00:04:36.389 03:55:50 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:36.389 03:55:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.389 03:55:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.389 03:55:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.649 ************************************ 00:04:36.649 START TEST skip_rpc_with_json 00:04:36.649 ************************************ 
00:04:36.649 03:55:51 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:36.649 03:55:51 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:36.649 03:55:51 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3634266 00:04:36.649 03:55:51 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.649 03:55:51 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.649 03:55:51 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3634266 00:04:36.649 03:55:51 -- common/autotest_common.sh@817 -- # '[' -z 3634266 ']' 00:04:36.649 03:55:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.649 03:55:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.649 03:55:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.649 03:55:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.649 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:04:36.649 [2024-04-19 03:55:51.065067] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:04:36.649 [2024-04-19 03:55:51.065123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634266 ] 00:04:36.649 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.649 [2024-04-19 03:55:51.147990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.909 [2024-04-19 03:55:51.233386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.170 03:55:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.170 03:55:51 -- common/autotest_common.sh@850 -- # return 0 00:04:37.170 03:55:51 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:37.170 03:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.170 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.170 [2024-04-19 03:55:51.452572] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:37.170 request: 00:04:37.170 { 00:04:37.170 "trtype": "tcp", 00:04:37.170 "method": "nvmf_get_transports", 00:04:37.170 "req_id": 1 00:04:37.170 } 00:04:37.170 Got JSON-RPC error response 00:04:37.170 response: 00:04:37.170 { 00:04:37.170 "code": -19, 00:04:37.170 "message": "No such device" 00:04:37.170 } 00:04:37.170 03:55:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:37.170 03:55:51 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:37.170 03:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.170 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.170 [2024-04-19 03:55:51.464700] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.170 03:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.170 03:55:51 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:37.170 03:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:37.170 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.170 03:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:37.170 03:55:51 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.170 { 
"subsystems": [
  { "subsystem": "vfio_user_target", "config": null },
  { "subsystem": "keyring", "config": [] },
  { "subsystem": "iobuf", "config": [
      { "method": "iobuf_set_options", "params": {
          "small_pool_count": 8192, "large_pool_count": 1024,
          "small_bufsize": 8192, "large_bufsize": 135168 } }
  ] },
  { "subsystem": "sock", "config": [
      { "method": "sock_impl_set_options", "params": {
          "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152,
          "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0,
          "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false,
          "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } },
      { "method": "sock_impl_set_options", "params": {
          "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096,
          "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0,
          "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false,
          "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } }
  ] },
  { "subsystem": "vmd", "config": [] },
  { "subsystem": "accel", "config": [
      { "method": "accel_set_options", "params": {
          "small_cache_size": 128, "large_cache_size": 16,
          "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 } }
  ] },
  { "subsystem": "bdev", "config": [
      { "method": "bdev_set_options", "params": {
          "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true,
          "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 } },
      { "method": "bdev_raid_set_options", "params": { "process_window_size_kb": 1024 } },
      { "method": "bdev_iscsi_set_options", "params": { "timeout_sec": 30 } },
      { "method": "bdev_nvme_set_options", "params": {
          "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0,
          "keep_alive_timeout_ms": 10000, "arbitration_burst": 0,
          "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0,
          "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0,
          "io_queue_requests": 0, "delay_cmd_submit": true,
          "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0,
          "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0,
          "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0,
          "nvme_error_stat": false, "rdma_srq_size": 0, "io_path_stat": false,
          "allow_accel_sequence": false, "rdma_max_cq_size": 0, "rdma_cm_event_timeout_ms": 0,
          "dhchap_digests": [ "sha256", "sha384", "sha512" ],
          "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] } },
      { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
      { "method": "bdev_wait_for_examine" }
  ] },
  { "subsystem": "scsi", "config": null },
  { "subsystem": "scheduler", "config": [
      { "method": "framework_set_scheduler", "params": { "name": "static" } }
  ] },
  { "subsystem": "vhost_scsi", "config": [] },
  { "subsystem": "vhost_blk", "config": [] },
  { "subsystem": "ublk", "config": [] },
  { "subsystem": "nbd", "config": [] },
  { "subsystem": "nvmf", "config": [
      { "method": "nvmf_set_config", "params": {
          "discovery_filter": "match_any",
          "admin_cmd_passthru": { "identify_ctrlr": false } } },
      { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
      { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } },
      { "method": "nvmf_create_transport", "params": {
          "trtype": "TCP", "max_queue_depth": 128, "max_io_qpairs_per_ctrlr": 127,
          "in_capsule_data_size": 4096, "max_io_size": 131072, "io_unit_size": 131072,
          "max_aq_depth": 128, "num_shared_buffers": 511, "buf_cache_size": 4294967295,
          "dif_insert_or_strip": false, "zcopy": false, "c2h_success": true,
          "sock_priority": 0, "abort_timeout_sec": 1, "ack_timeout": 0 } }
  ] },
  { "subsystem": "iscsi", "config": [
      { "method": "iscsi_set_options", "params": {
          "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128,
          "max_connections_per_session": 2, "max_queue_depth": 64,
          "default_time2wait": 2, "default_time2retain": 20,
          "first_burst_length": 8192, "immediate_data": true,
          "allow_duplicated_isid": false, "error_recovery_level": 0,
          "nop_timeout": 60, "nop_in_interval": 30,
          "disable_chap": false, "require_chap": false, "mutual_chap": false,
          "chap_group": 0, "max_large_datain_per_connection": 64,
          "max_r2t_per_connection": 4, "pdu_pool_size": 36864,
          "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 } }
  ] }
]
}
00:04:37.171 03:55:51 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:37.171 03:55:51 -- rpc/skip_rpc.sh@40 -- # killprocess 3634266
00:04:37.171 03:55:51 -- common/autotest_common.sh@936 -- # '[' -z 3634266 ']'
00:04:37.171 03:55:51 -- common/autotest_common.sh@940 -- # kill -0 3634266
00:04:37.171 03:55:51 -- common/autotest_common.sh@941 -- # uname
00:04:37.171 03:55:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:37.171 03:55:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3634266
00:04:37.171 03:55:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:37.171 03:55:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:37.171 03:55:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3634266'
killing process with pid 3634266
03:55:51 -- common/autotest_common.sh@955 -- # kill 3634266
00:04:37.172 03:55:51 -- common/autotest_common.sh@960 -- # wait 3634266
00:04:37.741 03:55:52 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3634533
00:04:37.741 03:55:52 -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:37.741 03:55:52 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:43.079 03:55:57 -- rpc/skip_rpc.sh@50 -- # killprocess 3634533
00:04:43.079 03:55:57 -- common/autotest_common.sh@936 -- # '[' -z 3634533 ']'
00:04:43.079 03:55:57 -- common/autotest_common.sh@940 -- # kill -0 3634533
00:04:43.079 03:55:57 -- common/autotest_common.sh@941 -- # uname
00:04:43.079 03:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:43.079 03:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3634533
00:04:43.079 03:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:43.079 03:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:43.079 03:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3634533'
killing process with pid 3634533
03:55:57 -- common/autotest_common.sh@955 -- # kill 3634533
03:55:57 -- common/autotest_common.sh@960 -- # wait 3634533
00:04:43.079 03:55:57 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
03:55:57 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:43.079 
00:04:43.079 real 0m6.450s
00:04:43.079 user 0m6.158s
00:04:43.079 sys 0m0.620s
03:55:57 --
common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.080 03:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.080 ************************************ 00:04:43.080 END TEST skip_rpc_with_json 00:04:43.080 ************************************ 00:04:43.080 03:55:57 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.080 03:55:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.080 03:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.080 03:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.339 ************************************ 00:04:43.339 START TEST skip_rpc_with_delay 00:04:43.339 ************************************ 00:04:43.339 03:55:57 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:43.339 03:55:57 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.339 03:55:57 -- common/autotest_common.sh@638 -- # local es=0 00:04:43.339 03:55:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.339 03:55:57 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.339 03:55:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.339 03:55:57 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.339 03:55:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.339 03:55:57 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.339 03:55:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.339 03:55:57 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.339 03:55:57 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.339 03:55:57 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.339 [2024-04-19 03:55:57.680399] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
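The startup failure above is the point of this test: --wait-for-rpc defers subsystem initialization until an RPC arrives, which is impossible when --no-rpc-server is also given. A minimal sketch of the same negative check, assuming an SPDK build tree (the err.log name is illustrative):

if ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 2> err.log; then
    # the target must refuse this flag combination at startup
    grep -q "Cannot use '--wait-for-rpc'" err.log && echo 'failed as expected'
fi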
00:04:43.339 [2024-04-19 03:55:57.680487] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:43.339 03:55:57 -- common/autotest_common.sh@641 -- # es=1 00:04:43.339 03:55:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:43.339 03:55:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:43.339 03:55:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:43.339 00:04:43.339 real 0m0.074s 00:04:43.339 user 0m0.052s 00:04:43.339 sys 0m0.021s 00:04:43.339 03:55:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.339 03:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.339 ************************************ 00:04:43.339 END TEST skip_rpc_with_delay 00:04:43.339 ************************************ 00:04:43.339 03:55:57 -- rpc/skip_rpc.sh@77 -- # uname 00:04:43.339 03:55:57 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:43.339 03:55:57 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:43.339 03:55:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.339 03:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.339 03:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.599 ************************************ 00:04:43.599 START TEST exit_on_failed_rpc_init 00:04:43.599 ************************************ 00:04:43.599 03:55:57 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:43.599 03:55:57 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3635648 00:04:43.599 03:55:57 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3635648 00:04:43.599 03:55:57 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.599 03:55:57 -- common/autotest_common.sh@817 -- # '[' -z 3635648 ']' 00:04:43.599 03:55:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.599 03:55:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.599 03:55:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.599 03:55:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.599 03:55:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.599 [2024-04-19 03:55:57.928955] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
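The waitforlisten helper above polls until the new target's RPC socket answers. A rough bash equivalent of that loop, assuming rpc.py from the same tree (retry count and interval are illustrative):

for (( i = 0; i < 100; i++ )); do
    # rpc_get_methods succeeds only once the RPC server accepts connections
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done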
00:04:43.599 [2024-04-19 03:55:57.929015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635648 ] 00:04:43.599 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.599 [2024-04-19 03:55:58.010123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.599 [2024-04-19 03:55:58.100213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.859 03:55:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.859 03:55:58 -- common/autotest_common.sh@850 -- # return 0 00:04:43.859 03:55:58 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.859 03:55:58 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.859 03:55:58 -- common/autotest_common.sh@638 -- # local es=0 00:04:43.859 03:55:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.859 03:55:58 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.859 03:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.859 03:55:58 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.859 03:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.859 03:55:58 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.859 03:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:43.859 03:55:58 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.859 03:55:58 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.859 03:55:58 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.859 [2024-04-19 03:55:58.370514] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:04:43.859 [2024-04-19 03:55:58.370575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635663 ] 00:04:44.118 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.118 [2024-04-19 03:55:58.444204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.118 [2024-04-19 03:55:58.529982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.118 [2024-04-19 03:55:58.530061] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
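The "socket in use" error above is expected: both instances defaulted to /var/tmp/spdk.sock. Two targets can coexist if each is given its own RPC socket with -r, e.g. (socket paths illustrative):

./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_first.sock &
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &   # no clash, separate sockets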
00:04:44.119 [2024-04-19 03:55:58.530073] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.119 [2024-04-19 03:55:58.530082] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.119 03:55:58 -- common/autotest_common.sh@641 -- # es=234 00:04:44.119 03:55:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:44.119 03:55:58 -- common/autotest_common.sh@650 -- # es=106 00:04:44.119 03:55:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:44.119 03:55:58 -- common/autotest_common.sh@658 -- # es=1 00:04:44.119 03:55:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:44.119 03:55:58 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.119 03:55:58 -- rpc/skip_rpc.sh@70 -- # killprocess 3635648 00:04:44.119 03:55:58 -- common/autotest_common.sh@936 -- # '[' -z 3635648 ']' 00:04:44.119 03:55:58 -- common/autotest_common.sh@940 -- # kill -0 3635648 00:04:44.119 03:55:58 -- common/autotest_common.sh@941 -- # uname 00:04:44.119 03:55:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.119 03:55:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3635648 00:04:44.378 03:55:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.378 03:55:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.378 03:55:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3635648' 00:04:44.378 killing process with pid 3635648 00:04:44.378 03:55:58 -- common/autotest_common.sh@955 -- # kill 3635648 00:04:44.378 03:55:58 -- common/autotest_common.sh@960 -- # wait 3635648 00:04:44.637 00:04:44.637 real 0m1.168s 00:04:44.637 user 0m1.336s 00:04:44.637 sys 0m0.428s 00:04:44.637 03:55:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.637 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.638 ************************************ 00:04:44.638 END TEST exit_on_failed_rpc_init 00:04:44.638 ************************************ 00:04:44.638 03:55:59 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.638 00:04:44.638 real 0m13.914s 00:04:44.638 user 0m13.009s 00:04:44.638 sys 0m1.809s 00:04:44.638 03:55:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.638 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.638 ************************************ 00:04:44.638 END TEST skip_rpc 00:04:44.638 ************************************ 00:04:44.638 03:55:59 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.638 03:55:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.638 03:55:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.638 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 START TEST rpc_client 00:04:44.897 ************************************ 00:04:44.897 03:55:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:44.897 * Looking for test storage... 
00:04:44.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:44.897 03:55:59 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:44.897 OK 00:04:44.897 03:55:59 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.897 00:04:44.897 real 0m0.117s 00:04:44.897 user 0m0.050s 00:04:44.897 sys 0m0.075s 00:04:44.897 03:55:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.897 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 END TEST rpc_client 00:04:44.897 ************************************ 00:04:44.897 03:55:59 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:44.897 03:55:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.897 03:55:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.897 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.156 ************************************ 00:04:45.156 START TEST json_config 00:04:45.156 ************************************ 00:04:45.156 03:55:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:45.156 03:55:59 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.156 03:55:59 -- nvmf/common.sh@7 -- # uname -s 00:04:45.156 03:55:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.156 03:55:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.156 03:55:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.156 03:55:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.156 03:55:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.156 03:55:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.156 03:55:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.156 03:55:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.156 03:55:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.156 03:55:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.156 03:55:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:04:45.156 03:55:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:04:45.156 03:55:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.156 03:55:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.156 03:55:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.156 03:55:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.156 03:55:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.156 03:55:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.156 03:55:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.156 03:55:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.156 03:55:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.156 03:55:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.156 03:55:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.156 03:55:59 -- paths/export.sh@5 -- # export PATH 00:04:45.156 03:55:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.156 03:55:59 -- nvmf/common.sh@47 -- # : 0 00:04:45.156 03:55:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:45.156 03:55:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:45.156 03:55:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.156 03:55:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.156 03:55:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.156 03:55:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:45.156 03:55:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:45.156 03:55:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:45.156 03:55:59 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.156 03:55:59 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.156 03:55:59 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.156 03:55:59 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.156 03:55:59 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.156 03:55:59 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:45.156 03:55:59 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:45.156 03:55:59 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:45.156 03:55:59 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:45.156 03:55:59 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:45.156 03:55:59 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:45.156 03:55:59 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:45.156 03:55:59 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:45.156 03:55:59 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:45.156 03:55:59 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.156 03:55:59 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:45.156 INFO: JSON configuration test init 00:04:45.156 03:55:59 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:45.156 03:55:59 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:45.156 03:55:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:45.156 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.156 03:55:59 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:45.156 03:55:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:45.156 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.156 03:55:59 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:45.156 03:55:59 -- json_config/common.sh@9 -- # local app=target 00:04:45.156 03:55:59 -- json_config/common.sh@10 -- # shift 00:04:45.156 03:55:59 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.156 03:55:59 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.156 03:55:59 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.156 03:55:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.156 03:55:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.156 03:55:59 -- json_config/common.sh@22 -- # app_pid["$app"]=3636045 00:04:45.156 03:55:59 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.156 Waiting for target to run... 00:04:45.156 03:55:59 -- json_config/common.sh@25 -- # waitforlisten 3636045 /var/tmp/spdk_tgt.sock 00:04:45.156 03:55:59 -- common/autotest_common.sh@817 -- # '[' -z 3636045 ']' 00:04:45.156 03:55:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.156 03:55:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.156 03:55:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.156 03:55:59 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:45.157 03:55:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.157 03:55:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.157 [2024-04-19 03:55:59.682835] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
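Because the target above started with --wait-for-rpc, it idles until configuration arrives over /var/tmp/spdk_tgt.sock. A sketch of the shape of that handshake as this test drives it (the config file name is illustrative):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# load_config replays a saved configuration; initialization then proceeds
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_config.json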
00:04:45.415 [2024-04-19 03:55:59.682896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636045 ] 00:04:45.415 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.674 [2024-04-19 03:55:59.983975] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.674 [2024-04-19 03:56:00.066984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.241 03:56:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.241 03:56:00 -- common/autotest_common.sh@850 -- # return 0 00:04:46.241 03:56:00 -- json_config/common.sh@26 -- # echo '' 00:04:46.241 00:04:46.241 03:56:00 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:46.241 03:56:00 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:46.241 03:56:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:46.241 03:56:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.241 03:56:00 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:46.241 03:56:00 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:46.241 03:56:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:46.241 03:56:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.241 03:56:00 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.241 03:56:00 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:46.241 03:56:00 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:49.539 03:56:03 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:49.539 03:56:03 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:49.539 03:56:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:49.539 03:56:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.539 03:56:03 -- json_config/json_config.sh@45 -- # local ret=0 00:04:49.539 03:56:03 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:49.539 03:56:03 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:49.539 03:56:03 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:49.539 03:56:03 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:49.539 03:56:03 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:49.539 03:56:04 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:49.539 03:56:04 -- json_config/json_config.sh@48 -- # local get_types 00:04:49.539 03:56:04 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:49.539 03:56:04 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:49.539 03:56:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.539 03:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.797 03:56:04 -- json_config/json_config.sh@55 -- # return 0 00:04:49.797 03:56:04 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:49.797 03:56:04 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:49.797 03:56:04 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:49.797 03:56:04 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:49.797 03:56:04 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:49.797 03:56:04 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:49.797 03:56:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:49.797 03:56:04 -- common/autotest_common.sh@10 -- # set +x 00:04:49.798 03:56:04 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:49.798 03:56:04 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:49.798 03:56:04 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:49.798 03:56:04 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:49.798 03:56:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:50.056 MallocForNvmf0 00:04:50.056 03:56:04 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.056 03:56:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:50.056 MallocForNvmf1 00:04:50.315 03:56:04 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.315 03:56:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:50.315 [2024-04-19 03:56:04.811341] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.315 03:56:04 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:50.315 03:56:04 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:50.573 03:56:05 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:50.573 03:56:05 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:50.831 03:56:05 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:50.831 03:56:05 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:51.089 03:56:05 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:51.089 03:56:05 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:51.348 [2024-04-19 03:56:05.746352] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.348 03:56:05 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:51.348 03:56:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:51.348 
03:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.348 03:56:05 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:51.348 03:56:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:51.348 03:56:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.348 03:56:05 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:51.348 03:56:05 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:51.348 03:56:05 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:51.607 MallocBdevForConfigChangeCheck 00:04:51.607 03:56:06 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:51.607 03:56:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:51.607 03:56:06 -- common/autotest_common.sh@10 -- # set +x 00:04:51.607 03:56:06 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:51.607 03:56:06 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.175 03:56:06 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:52.175 INFO: shutting down applications... 00:04:52.175 03:56:06 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:52.175 03:56:06 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:52.175 03:56:06 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:52.175 03:56:06 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:54.080 Calling clear_iscsi_subsystem 00:04:54.080 Calling clear_nvmf_subsystem 00:04:54.080 Calling clear_nbd_subsystem 00:04:54.080 Calling clear_ublk_subsystem 00:04:54.080 Calling clear_vhost_blk_subsystem 00:04:54.080 Calling clear_vhost_scsi_subsystem 00:04:54.080 Calling clear_bdev_subsystem 00:04:54.081 03:56:08 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:54.081 03:56:08 -- json_config/json_config.sh@343 -- # count=100 00:04:54.081 03:56:08 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:54.081 03:56:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.081 03:56:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:54.081 03:56:08 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:54.081 03:56:08 -- json_config/json_config.sh@345 -- # break 00:04:54.081 03:56:08 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:54.081 03:56:08 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:54.081 03:56:08 -- json_config/common.sh@31 -- # local app=target 00:04:54.081 03:56:08 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.081 03:56:08 -- json_config/common.sh@35 -- # [[ -n 3636045 ]] 00:04:54.081 03:56:08 -- json_config/common.sh@38 -- # kill -SIGINT 3636045 00:04:54.081 03:56:08 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.081 03:56:08 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.081 03:56:08 -- json_config/common.sh@41 -- # kill -0 3636045 00:04:54.081 03:56:08 -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.650 03:56:09 -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.650 03:56:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.650 03:56:09 -- json_config/common.sh@41 -- # kill -0 3636045 00:04:54.650 03:56:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:54.650 03:56:09 -- json_config/common.sh@43 -- # break 00:04:54.650 03:56:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:54.650 03:56:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:54.650 SPDK target shutdown done 00:04:54.650 03:56:09 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:54.650 INFO: relaunching applications... 00:04:54.650 03:56:09 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.650 03:56:09 -- json_config/common.sh@9 -- # local app=target 00:04:54.650 03:56:09 -- json_config/common.sh@10 -- # shift 00:04:54.650 03:56:09 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:54.650 03:56:09 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:54.650 03:56:09 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:54.650 03:56:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.650 03:56:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:54.650 03:56:09 -- json_config/common.sh@22 -- # app_pid["$app"]=3637996 00:04:54.650 03:56:09 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:54.650 Waiting for target to run... 00:04:54.650 03:56:09 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:54.650 03:56:09 -- json_config/common.sh@25 -- # waitforlisten 3637996 /var/tmp/spdk_tgt.sock 00:04:54.650 03:56:09 -- common/autotest_common.sh@817 -- # '[' -z 3637996 ']' 00:04:54.650 03:56:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:54.650 03:56:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:54.650 03:56:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:54.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:54.650 03:56:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:54.650 03:56:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.650 [2024-04-19 03:56:09.107503] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
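This relaunch is the heart of the json_config test: a config saved from the first run must reproduce the same target. The round trip in isolation, with $tgt_pid standing in for the first instance's PID:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
kill -SIGINT "$tgt_pid" && wait "$tgt_pid"   # clean shutdown of the first target
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &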
00:04:54.650 [2024-04-19 03:56:09.107572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637996 ] 00:04:54.650 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.218 [2024-04-19 03:56:09.558901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.218 [2024-04-19 03:56:09.665732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.506 [2024-04-19 03:56:12.696720] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.506 [2024-04-19 03:56:12.729109] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.506 03:56:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.506 03:56:12 -- common/autotest_common.sh@850 -- # return 0 00:04:58.506 03:56:12 -- json_config/common.sh@26 -- # echo '' 00:04:58.506 00:04:58.506 03:56:12 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:58.506 03:56:12 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:58.506 INFO: Checking if target configuration is the same... 00:04:58.506 03:56:12 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.506 03:56:12 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:58.506 03:56:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.506 + '[' 2 -ne 2 ']' 00:04:58.506 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.506 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:58.506 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.506 +++ basename /dev/fd/62 00:04:58.506 ++ mktemp /tmp/62.XXX 00:04:58.506 + tmp_file_1=/tmp/62.kxt 00:04:58.506 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.506 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.506 + tmp_file_2=/tmp/spdk_tgt_config.json.r5Y 00:04:58.506 + ret=0 00:04:58.506 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.765 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.765 + diff -u /tmp/62.kxt /tmp/spdk_tgt_config.json.r5Y 00:04:58.765 + echo 'INFO: JSON config files are the same' 00:04:58.765 INFO: JSON config files are the same 00:04:58.765 + rm /tmp/62.kxt /tmp/spdk_tgt_config.json.r5Y 00:04:58.765 + exit 0 00:04:58.765 03:56:13 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:58.765 03:56:13 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.765 INFO: changing configuration and checking if this can be detected... 
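The equality check above sorts both configs with the repo's config_filter.py before diffing, so ordering differences do not count as changes. Condensed (temp file names illustrative):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/running.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/running.json /tmp/saved.json && echo 'INFO: JSON config files are the same'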
00:04:58.765 03:56:13 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:58.765 03:56:13 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:59.024 03:56:13 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.024 03:56:13 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:59.024 03:56:13 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.024 + '[' 2 -ne 2 ']' 00:04:59.024 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:59.024 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:59.024 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:59.024 +++ basename /dev/fd/62 00:04:59.024 ++ mktemp /tmp/62.XXX 00:04:59.024 + tmp_file_1=/tmp/62.IhV 00:04:59.024 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.024 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:59.024 + tmp_file_2=/tmp/spdk_tgt_config.json.igt 00:04:59.024 + ret=0 00:04:59.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.283 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:59.542 + diff -u /tmp/62.IhV /tmp/spdk_tgt_config.json.igt 00:04:59.542 + ret=1 00:04:59.542 + echo '=== Start of file: /tmp/62.IhV ===' 00:04:59.542 + cat /tmp/62.IhV 00:04:59.542 + echo '=== End of file: /tmp/62.IhV ===' 00:04:59.542 + echo '' 00:04:59.542 + echo '=== Start of file: /tmp/spdk_tgt_config.json.igt ===' 00:04:59.542 + cat /tmp/spdk_tgt_config.json.igt 00:04:59.542 + echo '=== End of file: /tmp/spdk_tgt_config.json.igt ===' 00:04:59.542 + echo '' 00:04:59.542 + rm /tmp/62.IhV /tmp/spdk_tgt_config.json.igt 00:04:59.542 + exit 1 00:04:59.542 03:56:13 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:59.542 INFO: configuration change detected. 
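Change detection is the inverse case: deleting the marker bdev created earlier must make the same diff fail. Roughly, reusing the temp files from the sketch above:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/running.json
diff -u /tmp/running.json /tmp/saved.json || echo 'INFO: configuration change detected.'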
00:04:59.542 03:56:13 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:59.542 03:56:13 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:59.542 03:56:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:59.542 03:56:13 -- common/autotest_common.sh@10 -- # set +x 00:04:59.542 03:56:13 -- json_config/json_config.sh@307 -- # local ret=0 00:04:59.542 03:56:13 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:59.542 03:56:13 -- json_config/json_config.sh@317 -- # [[ -n 3637996 ]] 00:04:59.542 03:56:13 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:59.542 03:56:13 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:59.542 03:56:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:59.542 03:56:13 -- common/autotest_common.sh@10 -- # set +x 00:04:59.542 03:56:13 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:59.542 03:56:13 -- json_config/json_config.sh@193 -- # uname -s 00:04:59.542 03:56:13 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:59.542 03:56:13 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:59.542 03:56:13 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:59.542 03:56:13 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:59.542 03:56:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:59.542 03:56:13 -- common/autotest_common.sh@10 -- # set +x 00:04:59.542 03:56:13 -- json_config/json_config.sh@323 -- # killprocess 3637996 00:04:59.542 03:56:13 -- common/autotest_common.sh@936 -- # '[' -z 3637996 ']' 00:04:59.542 03:56:13 -- common/autotest_common.sh@940 -- # kill -0 3637996 00:04:59.542 03:56:13 -- common/autotest_common.sh@941 -- # uname 00:04:59.542 03:56:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:59.542 03:56:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3637996 00:04:59.542 03:56:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:59.542 03:56:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:59.542 03:56:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3637996' 00:04:59.542 killing process with pid 3637996 00:04:59.542 03:56:13 -- common/autotest_common.sh@955 -- # kill 3637996 00:04:59.542 03:56:13 -- common/autotest_common.sh@960 -- # wait 3637996 00:05:01.458 03:56:15 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.458 03:56:15 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:01.458 03:56:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:01.458 03:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.458 03:56:15 -- json_config/json_config.sh@328 -- # return 0 00:05:01.458 03:56:15 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:01.458 INFO: Success 00:05:01.458 00:05:01.458 real 0m16.048s 00:05:01.458 user 0m17.906s 00:05:01.458 sys 0m2.022s 00:05:01.458 03:56:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.458 03:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.458 ************************************ 00:05:01.458 END TEST json_config 00:05:01.458 ************************************ 00:05:01.458 03:56:15 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:01.458 03:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.458 03:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.458 03:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.459 ************************************ 00:05:01.459 START TEST json_config_extra_key 00:05:01.459 ************************************ 00:05:01.459 03:56:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:01.459 03:56:15 -- nvmf/common.sh@7 -- # uname -s 00:05:01.459 03:56:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.459 03:56:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.459 03:56:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.459 03:56:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.459 03:56:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.459 03:56:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.459 03:56:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.459 03:56:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.459 03:56:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.459 03:56:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.459 03:56:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:01.459 03:56:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:01.459 03:56:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.459 03:56:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.459 03:56:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.459 03:56:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.459 03:56:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:01.459 03:56:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.459 03:56:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.459 03:56:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.459 03:56:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.459 03:56:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.459 03:56:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.459 03:56:15 -- paths/export.sh@5 -- # export PATH 00:05:01.459 03:56:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.459 03:56:15 -- nvmf/common.sh@47 -- # : 0 00:05:01.459 03:56:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:01.459 03:56:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:01.459 03:56:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.459 03:56:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.459 03:56:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.459 03:56:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:01.459 03:56:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:01.459 03:56:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:01.459 INFO: launching applications... 
00:05:01.459 03:56:15 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.459 03:56:15 -- json_config/common.sh@9 -- # local app=target 00:05:01.459 03:56:15 -- json_config/common.sh@10 -- # shift 00:05:01.459 03:56:15 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.459 03:56:15 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.459 03:56:15 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.459 03:56:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.459 03:56:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.459 03:56:15 -- json_config/common.sh@22 -- # app_pid["$app"]=3639307 00:05:01.459 03:56:15 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.459 Waiting for target to run... 00:05:01.459 03:56:15 -- json_config/common.sh@25 -- # waitforlisten 3639307 /var/tmp/spdk_tgt.sock 00:05:01.459 03:56:15 -- common/autotest_common.sh@817 -- # '[' -z 3639307 ']' 00:05:01.459 03:56:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.459 03:56:15 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:01.459 03:56:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.459 03:56:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.459 03:56:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.459 03:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:01.459 [2024-04-19 03:56:15.920518] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:01.459 [2024-04-19 03:56:15.920587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639307 ] 00:05:01.459 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.718 [2024-04-19 03:56:16.223609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.977 [2024-04-19 03:56:16.303016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.545 03:56:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.545 03:56:16 -- common/autotest_common.sh@850 -- # return 0 00:05:02.545 03:56:16 -- json_config/common.sh@26 -- # echo '' 00:05:02.545 00:05:02.545 03:56:16 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:02.545 INFO: shutting down applications... 
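The shutdown that follows uses the harness's standard pattern: send SIGINT, then poll until the process exits. Roughly, with $app_pid standing in for the PID above (30 half-second ticks, as in json_config/common.sh):

kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2> /dev/null || break   # stop polling once the process is gone
    sleep 0.5
done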
00:05:02.545 03:56:16 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:02.545 03:56:16 -- json_config/common.sh@31 -- # local app=target 00:05:02.545 03:56:16 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.545 03:56:16 -- json_config/common.sh@35 -- # [[ -n 3639307 ]] 00:05:02.545 03:56:16 -- json_config/common.sh@38 -- # kill -SIGINT 3639307 00:05:02.545 03:56:16 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.545 03:56:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.545 03:56:16 -- json_config/common.sh@41 -- # kill -0 3639307 00:05:02.545 03:56:16 -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.113 03:56:17 -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.113 03:56:17 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.113 03:56:17 -- json_config/common.sh@41 -- # kill -0 3639307 00:05:03.113 03:56:17 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.113 03:56:17 -- json_config/common.sh@43 -- # break 00:05:03.113 03:56:17 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.113 03:56:17 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.113 SPDK target shutdown done 00:05:03.113 03:56:17 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.113 Success 00:05:03.113 00:05:03.113 real 0m1.597s 00:05:03.113 user 0m1.375s 00:05:03.113 sys 0m0.404s 00:05:03.113 03:56:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.113 03:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.113 ************************************ 00:05:03.113 END TEST json_config_extra_key 00:05:03.113 ************************************ 00:05:03.113 03:56:17 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.113 03:56:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.113 03:56:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.113 03:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.113 ************************************ 00:05:03.113 START TEST alias_rpc 00:05:03.113 ************************************ 00:05:03.113 03:56:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.113 * Looking for test storage... 00:05:03.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:03.113 03:56:17 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.113 03:56:17 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3639751 00:05:03.113 03:56:17 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3639751 00:05:03.113 03:56:17 -- common/autotest_common.sh@817 -- # '[' -z 3639751 ']' 00:05:03.113 03:56:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.113 03:56:17 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.113 03:56:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.113 03:56:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
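Note the socket path here: started without -r, spdk_tgt listens on the default /var/tmp/spdk.sock, so rpc.py needs no -s argument either. For instance:

./build/bin/spdk_tgt &                         # RPC server on the default /var/tmp/spdk.sock
./scripts/rpc.py rpc_get_methods > /dev/null   # client default matches the server default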
00:05:03.113 03:56:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.113 03:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.372 [2024-04-19 03:56:17.690477] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:03.372 [2024-04-19 03:56:17.690535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639751 ] 00:05:03.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.372 [2024-04-19 03:56:17.772809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.372 [2024-04-19 03:56:17.863375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.307 03:56:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:04.307 03:56:18 -- common/autotest_common.sh@850 -- # return 0 00:05:04.307 03:56:18 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:04.566 03:56:18 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3639751 00:05:04.566 03:56:18 -- common/autotest_common.sh@936 -- # '[' -z 3639751 ']' 00:05:04.566 03:56:18 -- common/autotest_common.sh@940 -- # kill -0 3639751 00:05:04.566 03:56:18 -- common/autotest_common.sh@941 -- # uname 00:05:04.566 03:56:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.566 03:56:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3639751 00:05:04.566 03:56:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.566 03:56:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.566 03:56:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3639751' 00:05:04.566 killing process with pid 3639751 00:05:04.566 03:56:18 -- common/autotest_common.sh@955 -- # kill 3639751 00:05:04.566 03:56:18 -- common/autotest_common.sh@960 -- # wait 3639751 00:05:04.824 00:05:04.824 real 0m1.754s 00:05:04.824 user 0m2.047s 00:05:04.824 sys 0m0.454s 00:05:04.824 03:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.824 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.824 ************************************ 00:05:04.824 END TEST alias_rpc 00:05:04.824 ************************************ 00:05:04.824 03:56:19 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:04.824 03:56:19 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:04.824 03:56:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.824 03:56:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.824 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.082 ************************************ 00:05:05.082 START TEST spdkcli_tcp 00:05:05.082 ************************************ 00:05:05.082 03:56:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:05.082 * Looking for test storage... 
00:05:05.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:05.082 03:56:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:05.082 03:56:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.082 03:56:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:05.082 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3640090 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@27 -- # waitforlisten 3640090 00:05:05.082 03:56:19 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.082 03:56:19 -- common/autotest_common.sh@817 -- # '[' -z 3640090 ']' 00:05:05.082 03:56:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.082 03:56:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.082 03:56:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.082 03:56:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.082 03:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:05.342 [2024-04-19 03:56:19.611352] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:05.342 [2024-04-19 03:56:19.611411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640090 ] 00:05:05.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.342 [2024-04-19 03:56:19.691497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.342 [2024-04-19 03:56:19.780304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.342 [2024-04-19 03:56:19.780310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.941 03:56:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.941 03:56:20 -- common/autotest_common.sh@850 -- # return 0 00:05:05.941 03:56:20 -- spdkcli/tcp.sh@31 -- # socat_pid=3640349 00:05:05.941 03:56:20 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.941 03:56:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.201 [ 00:05:06.201 "bdev_malloc_delete", 00:05:06.201 "bdev_malloc_create", 00:05:06.201 "bdev_null_resize", 00:05:06.201 "bdev_null_delete", 00:05:06.201 "bdev_null_create", 00:05:06.201 "bdev_nvme_cuse_unregister", 00:05:06.201 "bdev_nvme_cuse_register", 00:05:06.201 "bdev_opal_new_user", 00:05:06.201 "bdev_opal_set_lock_state", 00:05:06.201 "bdev_opal_delete", 00:05:06.201 "bdev_opal_get_info", 00:05:06.201 "bdev_opal_create", 00:05:06.201 "bdev_nvme_opal_revert", 00:05:06.201 "bdev_nvme_opal_init", 00:05:06.201 "bdev_nvme_send_cmd", 00:05:06.201 "bdev_nvme_get_path_iostat", 00:05:06.201 "bdev_nvme_get_mdns_discovery_info", 00:05:06.201 "bdev_nvme_stop_mdns_discovery", 00:05:06.201 "bdev_nvme_start_mdns_discovery", 00:05:06.201 "bdev_nvme_set_multipath_policy", 00:05:06.201 "bdev_nvme_set_preferred_path", 00:05:06.201 "bdev_nvme_get_io_paths", 00:05:06.201 "bdev_nvme_remove_error_injection", 00:05:06.201 "bdev_nvme_add_error_injection", 00:05:06.201 "bdev_nvme_get_discovery_info", 00:05:06.201 "bdev_nvme_stop_discovery", 00:05:06.201 "bdev_nvme_start_discovery", 00:05:06.201 "bdev_nvme_get_controller_health_info", 00:05:06.201 "bdev_nvme_disable_controller", 00:05:06.201 "bdev_nvme_enable_controller", 00:05:06.201 "bdev_nvme_reset_controller", 00:05:06.201 "bdev_nvme_get_transport_statistics", 00:05:06.201 "bdev_nvme_apply_firmware", 00:05:06.201 "bdev_nvme_detach_controller", 00:05:06.201 "bdev_nvme_get_controllers", 00:05:06.201 "bdev_nvme_attach_controller", 00:05:06.201 "bdev_nvme_set_hotplug", 00:05:06.201 "bdev_nvme_set_options", 00:05:06.201 "bdev_passthru_delete", 00:05:06.201 "bdev_passthru_create", 00:05:06.201 "bdev_lvol_grow_lvstore", 00:05:06.201 "bdev_lvol_get_lvols", 00:05:06.201 "bdev_lvol_get_lvstores", 00:05:06.201 "bdev_lvol_delete", 00:05:06.201 "bdev_lvol_set_read_only", 00:05:06.201 "bdev_lvol_resize", 00:05:06.201 "bdev_lvol_decouple_parent", 00:05:06.201 "bdev_lvol_inflate", 00:05:06.201 "bdev_lvol_rename", 00:05:06.201 "bdev_lvol_clone_bdev", 00:05:06.201 "bdev_lvol_clone", 00:05:06.201 "bdev_lvol_snapshot", 00:05:06.201 "bdev_lvol_create", 00:05:06.201 "bdev_lvol_delete_lvstore", 00:05:06.201 "bdev_lvol_rename_lvstore", 00:05:06.201 "bdev_lvol_create_lvstore", 00:05:06.201 "bdev_raid_set_options", 00:05:06.201 "bdev_raid_remove_base_bdev", 00:05:06.201 "bdev_raid_add_base_bdev", 00:05:06.201 "bdev_raid_delete", 00:05:06.201 "bdev_raid_create", 
00:05:06.201 "bdev_raid_get_bdevs", 00:05:06.201 "bdev_error_inject_error", 00:05:06.201 "bdev_error_delete", 00:05:06.201 "bdev_error_create", 00:05:06.201 "bdev_split_delete", 00:05:06.201 "bdev_split_create", 00:05:06.201 "bdev_delay_delete", 00:05:06.201 "bdev_delay_create", 00:05:06.201 "bdev_delay_update_latency", 00:05:06.201 "bdev_zone_block_delete", 00:05:06.201 "bdev_zone_block_create", 00:05:06.202 "blobfs_create", 00:05:06.202 "blobfs_detect", 00:05:06.202 "blobfs_set_cache_size", 00:05:06.202 "bdev_aio_delete", 00:05:06.202 "bdev_aio_rescan", 00:05:06.202 "bdev_aio_create", 00:05:06.202 "bdev_ftl_set_property", 00:05:06.202 "bdev_ftl_get_properties", 00:05:06.202 "bdev_ftl_get_stats", 00:05:06.202 "bdev_ftl_unmap", 00:05:06.202 "bdev_ftl_unload", 00:05:06.202 "bdev_ftl_delete", 00:05:06.202 "bdev_ftl_load", 00:05:06.202 "bdev_ftl_create", 00:05:06.202 "bdev_virtio_attach_controller", 00:05:06.202 "bdev_virtio_scsi_get_devices", 00:05:06.202 "bdev_virtio_detach_controller", 00:05:06.202 "bdev_virtio_blk_set_hotplug", 00:05:06.202 "bdev_iscsi_delete", 00:05:06.202 "bdev_iscsi_create", 00:05:06.202 "bdev_iscsi_set_options", 00:05:06.202 "accel_error_inject_error", 00:05:06.202 "ioat_scan_accel_module", 00:05:06.202 "dsa_scan_accel_module", 00:05:06.202 "iaa_scan_accel_module", 00:05:06.202 "vfu_virtio_create_scsi_endpoint", 00:05:06.202 "vfu_virtio_scsi_remove_target", 00:05:06.202 "vfu_virtio_scsi_add_target", 00:05:06.202 "vfu_virtio_create_blk_endpoint", 00:05:06.202 "vfu_virtio_delete_endpoint", 00:05:06.202 "keyring_file_remove_key", 00:05:06.202 "keyring_file_add_key", 00:05:06.202 "iscsi_set_options", 00:05:06.202 "iscsi_get_auth_groups", 00:05:06.202 "iscsi_auth_group_remove_secret", 00:05:06.202 "iscsi_auth_group_add_secret", 00:05:06.202 "iscsi_delete_auth_group", 00:05:06.202 "iscsi_create_auth_group", 00:05:06.202 "iscsi_set_discovery_auth", 00:05:06.202 "iscsi_get_options", 00:05:06.202 "iscsi_target_node_request_logout", 00:05:06.202 "iscsi_target_node_set_redirect", 00:05:06.202 "iscsi_target_node_set_auth", 00:05:06.202 "iscsi_target_node_add_lun", 00:05:06.202 "iscsi_get_stats", 00:05:06.202 "iscsi_get_connections", 00:05:06.202 "iscsi_portal_group_set_auth", 00:05:06.202 "iscsi_start_portal_group", 00:05:06.202 "iscsi_delete_portal_group", 00:05:06.202 "iscsi_create_portal_group", 00:05:06.202 "iscsi_get_portal_groups", 00:05:06.202 "iscsi_delete_target_node", 00:05:06.202 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.202 "iscsi_target_node_add_pg_ig_maps", 00:05:06.202 "iscsi_create_target_node", 00:05:06.202 "iscsi_get_target_nodes", 00:05:06.202 "iscsi_delete_initiator_group", 00:05:06.202 "iscsi_initiator_group_remove_initiators", 00:05:06.202 "iscsi_initiator_group_add_initiators", 00:05:06.202 "iscsi_create_initiator_group", 00:05:06.202 "iscsi_get_initiator_groups", 00:05:06.202 "nvmf_set_crdt", 00:05:06.202 "nvmf_set_config", 00:05:06.202 "nvmf_set_max_subsystems", 00:05:06.202 "nvmf_subsystem_get_listeners", 00:05:06.202 "nvmf_subsystem_get_qpairs", 00:05:06.202 "nvmf_subsystem_get_controllers", 00:05:06.202 "nvmf_get_stats", 00:05:06.202 "nvmf_get_transports", 00:05:06.202 "nvmf_create_transport", 00:05:06.202 "nvmf_get_targets", 00:05:06.202 "nvmf_delete_target", 00:05:06.202 "nvmf_create_target", 00:05:06.202 "nvmf_subsystem_allow_any_host", 00:05:06.202 "nvmf_subsystem_remove_host", 00:05:06.202 "nvmf_subsystem_add_host", 00:05:06.202 "nvmf_ns_remove_host", 00:05:06.202 "nvmf_ns_add_host", 00:05:06.202 "nvmf_subsystem_remove_ns", 00:05:06.202 
"nvmf_subsystem_add_ns", 00:05:06.202 "nvmf_subsystem_listener_set_ana_state", 00:05:06.202 "nvmf_discovery_get_referrals", 00:05:06.202 "nvmf_discovery_remove_referral", 00:05:06.202 "nvmf_discovery_add_referral", 00:05:06.202 "nvmf_subsystem_remove_listener", 00:05:06.202 "nvmf_subsystem_add_listener", 00:05:06.202 "nvmf_delete_subsystem", 00:05:06.202 "nvmf_create_subsystem", 00:05:06.202 "nvmf_get_subsystems", 00:05:06.202 "env_dpdk_get_mem_stats", 00:05:06.202 "nbd_get_disks", 00:05:06.202 "nbd_stop_disk", 00:05:06.202 "nbd_start_disk", 00:05:06.202 "ublk_recover_disk", 00:05:06.202 "ublk_get_disks", 00:05:06.202 "ublk_stop_disk", 00:05:06.202 "ublk_start_disk", 00:05:06.202 "ublk_destroy_target", 00:05:06.202 "ublk_create_target", 00:05:06.202 "virtio_blk_create_transport", 00:05:06.202 "virtio_blk_get_transports", 00:05:06.202 "vhost_controller_set_coalescing", 00:05:06.202 "vhost_get_controllers", 00:05:06.202 "vhost_delete_controller", 00:05:06.202 "vhost_create_blk_controller", 00:05:06.202 "vhost_scsi_controller_remove_target", 00:05:06.202 "vhost_scsi_controller_add_target", 00:05:06.202 "vhost_start_scsi_controller", 00:05:06.202 "vhost_create_scsi_controller", 00:05:06.202 "thread_set_cpumask", 00:05:06.202 "framework_get_scheduler", 00:05:06.202 "framework_set_scheduler", 00:05:06.202 "framework_get_reactors", 00:05:06.202 "thread_get_io_channels", 00:05:06.202 "thread_get_pollers", 00:05:06.202 "thread_get_stats", 00:05:06.202 "framework_monitor_context_switch", 00:05:06.202 "spdk_kill_instance", 00:05:06.202 "log_enable_timestamps", 00:05:06.202 "log_get_flags", 00:05:06.202 "log_clear_flag", 00:05:06.202 "log_set_flag", 00:05:06.202 "log_get_level", 00:05:06.202 "log_set_level", 00:05:06.202 "log_get_print_level", 00:05:06.202 "log_set_print_level", 00:05:06.202 "framework_enable_cpumask_locks", 00:05:06.202 "framework_disable_cpumask_locks", 00:05:06.202 "framework_wait_init", 00:05:06.202 "framework_start_init", 00:05:06.202 "scsi_get_devices", 00:05:06.202 "bdev_get_histogram", 00:05:06.202 "bdev_enable_histogram", 00:05:06.202 "bdev_set_qos_limit", 00:05:06.202 "bdev_set_qd_sampling_period", 00:05:06.202 "bdev_get_bdevs", 00:05:06.202 "bdev_reset_iostat", 00:05:06.202 "bdev_get_iostat", 00:05:06.202 "bdev_examine", 00:05:06.202 "bdev_wait_for_examine", 00:05:06.202 "bdev_set_options", 00:05:06.202 "notify_get_notifications", 00:05:06.202 "notify_get_types", 00:05:06.202 "accel_get_stats", 00:05:06.202 "accel_set_options", 00:05:06.202 "accel_set_driver", 00:05:06.202 "accel_crypto_key_destroy", 00:05:06.202 "accel_crypto_keys_get", 00:05:06.202 "accel_crypto_key_create", 00:05:06.202 "accel_assign_opc", 00:05:06.202 "accel_get_module_info", 00:05:06.202 "accel_get_opc_assignments", 00:05:06.202 "vmd_rescan", 00:05:06.202 "vmd_remove_device", 00:05:06.202 "vmd_enable", 00:05:06.202 "sock_set_default_impl", 00:05:06.202 "sock_impl_set_options", 00:05:06.202 "sock_impl_get_options", 00:05:06.202 "iobuf_get_stats", 00:05:06.202 "iobuf_set_options", 00:05:06.202 "keyring_get_keys", 00:05:06.202 "framework_get_pci_devices", 00:05:06.202 "framework_get_config", 00:05:06.202 "framework_get_subsystems", 00:05:06.202 "vfu_tgt_set_base_path", 00:05:06.202 "trace_get_info", 00:05:06.202 "trace_get_tpoint_group_mask", 00:05:06.202 "trace_disable_tpoint_group", 00:05:06.202 "trace_enable_tpoint_group", 00:05:06.202 "trace_clear_tpoint_mask", 00:05:06.202 "trace_set_tpoint_mask", 00:05:06.202 "spdk_get_version", 00:05:06.202 "rpc_get_methods" 00:05:06.202 ] 00:05:06.202 03:56:20 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.202 03:56:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.202 03:56:20 -- common/autotest_common.sh@10 -- # set +x 00:05:06.461 03:56:20 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.461 03:56:20 -- spdkcli/tcp.sh@38 -- # killprocess 3640090 00:05:06.461 03:56:20 -- common/autotest_common.sh@936 -- # '[' -z 3640090 ']' 00:05:06.461 03:56:20 -- common/autotest_common.sh@940 -- # kill -0 3640090 00:05:06.461 03:56:20 -- common/autotest_common.sh@941 -- # uname 00:05:06.461 03:56:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.461 03:56:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3640090 00:05:06.461 03:56:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.461 03:56:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.461 03:56:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3640090' 00:05:06.461 killing process with pid 3640090 00:05:06.461 03:56:20 -- common/autotest_common.sh@955 -- # kill 3640090 00:05:06.461 03:56:20 -- common/autotest_common.sh@960 -- # wait 3640090 00:05:06.721 00:05:06.721 real 0m1.695s 00:05:06.721 user 0m3.210s 00:05:06.721 sys 0m0.456s 00:05:06.721 03:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.721 03:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:06.721 ************************************ 00:05:06.721 END TEST spdkcli_tcp 00:05:06.721 ************************************ 00:05:06.721 03:56:21 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.721 03:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.721 03:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.721 03:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:06.979 ************************************ 00:05:06.979 START TEST dpdk_mem_utility 00:05:06.979 ************************************ 00:05:06.979 03:56:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.979 * Looking for test storage... 00:05:06.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:06.980 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:06.980 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3640443 00:05:06.980 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3640443 00:05:06.980 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.980 03:56:21 -- common/autotest_common.sh@817 -- # '[' -z 3640443 ']' 00:05:06.980 03:56:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.980 03:56:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:06.980 03:56:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
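The spdkcli_tcp test that just ended talks to the RPC server over TCP by letting socat bridge port 9998 to the UNIX socket, then listing every registered method. Reconstructed from the trace above (paths shortened to the repo root; run from the spdk checkout):

# Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# Drive the target through the TCP side: 100 retries, 2-second timeout.
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid" 2>/dev/null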
00:05:06.980 03:56:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:06.980 03:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:06.980 [2024-04-19 03:56:21.470302] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:06.980 [2024-04-19 03:56:21.470369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640443 ] 00:05:06.980 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.238 [2024-04-19 03:56:21.544718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.238 [2024-04-19 03:56:21.632908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.498 03:56:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.498 03:56:21 -- common/autotest_common.sh@850 -- # return 0 00:05:07.498 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.498 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.498 03:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:07.498 03:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.498 { 00:05:07.498 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.498 } 00:05:07.498 03:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:07.498 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:07.498 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:07.498 1 heaps totaling size 814.000000 MiB 00:05:07.498 size: 814.000000 MiB heap id: 0 00:05:07.498 end heaps---------- 00:05:07.498 8 mempools totaling size 598.116089 MiB 00:05:07.498 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.498 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.498 size: 84.521057 MiB name: bdev_io_3640443 00:05:07.498 size: 51.011292 MiB name: evtpool_3640443 00:05:07.498 size: 50.003479 MiB name: msgpool_3640443 00:05:07.498 size: 21.763794 MiB name: PDU_Pool 00:05:07.498 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.498 size: 0.026123 MiB name: Session_Pool 00:05:07.498 end mempools------- 00:05:07.498 6 memzones totaling size 4.142822 MiB 00:05:07.498 size: 1.000366 MiB name: RG_ring_0_3640443 00:05:07.498 size: 1.000366 MiB name: RG_ring_1_3640443 00:05:07.498 size: 1.000366 MiB name: RG_ring_4_3640443 00:05:07.498 size: 1.000366 MiB name: RG_ring_5_3640443 00:05:07.498 size: 0.125366 MiB name: RG_ring_2_3640443 00:05:07.498 size: 0.015991 MiB name: RG_ring_3_3640443 00:05:07.498 end memzones------- 00:05:07.498 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.498 heap id: 0 total size: 814.000000 MiB number of busy elements: 42 number of free elements: 15 00:05:07.498 list of free elements. 
size: 12.517212 MiB 00:05:07.498 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:07.498 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:07.498 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:07.498 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:07.498 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:07.498 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:07.498 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:07.498 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:07.498 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:07.498 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:07.498 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:07.498 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:07.498 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:07.498 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:07.498 element at address: 0x200003a00000 with size: 0.353394 MiB 00:05:07.498 list of standard malloc elements. size: 199.220215 MiB 00:05:07.498 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:07.498 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:07.499 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:07.499 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:07.499 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.499 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.499 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:07.499 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.499 element at address: 0x200003aff280 with size: 0.002136 MiB 00:05:07.499 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:07.499 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003adaa40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003adac40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003adef00 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003aff1c0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000192efc40 with size: 0.000183 MiB 
00:05:07.499 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:07.499 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:07.499 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:07.499 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:07.499 list of memzone associated elements. size: 602.262573 MiB 00:05:07.499 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:07.499 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.499 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:07.499 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.499 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:07.499 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3640443_0 00:05:07.499 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:07.499 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3640443_0 00:05:07.499 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:07.499 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3640443_0 00:05:07.499 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:07.499 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.499 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:07.499 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.499 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:07.499 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3640443 00:05:07.499 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:07.499 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3640443 00:05:07.499 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.499 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3640443 00:05:07.499 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:07.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.499 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:07.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.499 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:07.499 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.499 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:07.499 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.499 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:07.499 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3640443 00:05:07.499 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:07.499 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3640443 00:05:07.499 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:07.499 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3640443 00:05:07.499 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:07.499 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3640443 00:05:07.499 element at 
address: 0x200003a5a840 with size: 0.500488 MiB 00:05:07.499 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3640443 00:05:07.499 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:07.499 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.499 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:07.499 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.499 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:07.499 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.499 element at address: 0x200003adefc0 with size: 0.125488 MiB 00:05:07.499 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3640443 00:05:07.499 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:07.499 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.499 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:07.499 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.499 element at address: 0x200003adad00 with size: 0.016113 MiB 00:05:07.499 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3640443 00:05:07.499 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:07.499 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.499 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:07.499 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3640443 00:05:07.499 element at address: 0x200003adab00 with size: 0.000305 MiB 00:05:07.499 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3640443 00:05:07.499 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:07.499 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.499 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.499 03:56:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3640443 00:05:07.499 03:56:21 -- common/autotest_common.sh@936 -- # '[' -z 3640443 ']' 00:05:07.499 03:56:21 -- common/autotest_common.sh@940 -- # kill -0 3640443 00:05:07.499 03:56:21 -- common/autotest_common.sh@941 -- # uname 00:05:07.499 03:56:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.499 03:56:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3640443 00:05:07.499 03:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.499 03:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.499 03:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3640443' 00:05:07.499 killing process with pid 3640443 00:05:07.499 03:56:22 -- common/autotest_common.sh@955 -- # kill 3640443 00:05:07.499 03:56:22 -- common/autotest_common.sh@960 -- # wait 3640443 00:05:08.068 00:05:08.068 real 0m1.057s 00:05:08.068 user 0m1.047s 00:05:08.068 sys 0m0.408s 00:05:08.068 03:56:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.068 03:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.068 ************************************ 00:05:08.068 END TEST dpdk_mem_utility 00:05:08.068 ************************************ 00:05:08.068 03:56:22 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:08.068 03:56:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.068 03:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 
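The heap, mempool, and memzone tables printed above come from a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py renders that file. Reconstructed from the trace (paths relative to the spdk checkout):

# Step 1: have the running target dump its DPDK memory statistics.
./scripts/rpc.py env_dpdk_get_mem_stats      # replies {"filename": "/tmp/spdk_mem_dump.txt"}
# Step 2: summarize heaps/mempools/memzones from the dump...
./scripts/dpdk_mem_info.py
# ...or expand heap 0 into the full element list seen above.
./scripts/dpdk_mem_info.py -m 0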
00:05:08.068 03:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.068 ************************************ 00:05:08.068 START TEST event 00:05:08.068 ************************************ 00:05:08.068 03:56:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:08.327 * Looking for test storage... 00:05:08.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:08.327 03:56:22 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:08.327 03:56:22 -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.327 03:56:22 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.327 03:56:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:08.327 03:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.327 03:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:08.327 ************************************ 00:05:08.327 START TEST event_perf 00:05:08.327 ************************************ 00:05:08.327 03:56:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.328 Running I/O for 1 seconds...[2024-04-19 03:56:22.803329] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:08.328 [2024-04-19 03:56:22.803398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640769 ] 00:05:08.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.586 [2024-04-19 03:56:22.876665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.586 [2024-04-19 03:56:22.966174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.586 [2024-04-19 03:56:22.966287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.586 [2024-04-19 03:56:22.966387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.587 [2024-04-19 03:56:22.966530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.968 Running I/O for 1 seconds... 00:05:09.968 lcore 0: 161714 00:05:09.968 lcore 1: 161715 00:05:09.968 lcore 2: 161715 00:05:09.968 lcore 3: 161714 00:05:09.968 done. 
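event_perf was started with -m 0xF, so the app spawns one reactor per set bit and the four per-lcore counters above land within one event of each other. A quick sanity check that a cpumask matches the expected reactor count (illustrative helper, not part of the test):

# Count the set bits in a cpumask; 0xF -> 4, matching the four lcore counters.
mask=0xF; bits=$(( mask )); count=0
while (( bits )); do (( count += bits & 1, bits >>= 1 )); done
echo "reactors expected: $count"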
00:05:09.968 00:05:09.968 real 0m1.282s 00:05:09.968 user 0m4.186s 00:05:09.968 sys 0m0.091s 00:05:09.968 03:56:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.968 03:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:09.968 ************************************ 00:05:09.968 END TEST event_perf 00:05:09.968 ************************************ 00:05:09.968 03:56:24 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.968 03:56:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:09.968 03:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.968 03:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:09.968 ************************************ 00:05:09.968 START TEST event_reactor 00:05:09.968 ************************************ 00:05:09.968 03:56:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.968 [2024-04-19 03:56:24.260025] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:09.968 [2024-04-19 03:56:24.260097] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641056 ] 00:05:09.968 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.968 [2024-04-19 03:56:24.343782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.968 [2024-04-19 03:56:24.432077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.346 test_start 00:05:11.346 oneshot 00:05:11.346 tick 100 00:05:11.346 tick 100 00:05:11.346 tick 250 00:05:11.346 tick 100 00:05:11.346 tick 100 00:05:11.346 tick 250 00:05:11.346 tick 100 00:05:11.346 tick 500 00:05:11.346 tick 100 00:05:11.346 tick 100 00:05:11.346 tick 250 00:05:11.346 tick 100 00:05:11.346 tick 100 00:05:11.346 test_end 00:05:11.346 00:05:11.346 real 0m1.287s 00:05:11.346 user 0m1.187s 00:05:11.346 sys 0m0.094s 00:05:11.346 03:56:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.346 03:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.346 ************************************ 00:05:11.346 END TEST event_reactor 00:05:11.346 ************************************ 00:05:11.346 03:56:25 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.347 03:56:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:11.347 03:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.347 03:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 ************************************ 00:05:11.347 START TEST event_reactor_perf 00:05:11.347 ************************************ 00:05:11.347 03:56:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.347 [2024-04-19 03:56:25.728872] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:11.347 [2024-04-19 03:56:25.728949] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641352 ] 00:05:11.347 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.347 [2024-04-19 03:56:25.815563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.606 [2024-04-19 03:56:25.911376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.543 test_start 00:05:12.543 test_end 00:05:12.543 Performance: 311420 events per second 00:05:12.543 00:05:12.543 real 0m1.305s 00:05:12.543 user 0m1.198s 00:05:12.543 sys 0m0.102s 00:05:12.543 03:56:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.543 03:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.543 ************************************ 00:05:12.543 END TEST event_reactor_perf 00:05:12.543 ************************************ 00:05:12.543 03:56:27 -- event/event.sh@49 -- # uname -s 00:05:12.543 03:56:27 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.543 03:56:27 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:12.543 03:56:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.543 03:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.543 03:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:12.802 ************************************ 00:05:12.802 START TEST event_scheduler 00:05:12.802 ************************************ 00:05:12.802 03:56:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:12.802 * Looking for test storage... 00:05:12.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:12.802 03:56:27 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.802 03:56:27 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3641730 00:05:12.802 03:56:27 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.802 03:56:27 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.802 03:56:27 -- scheduler/scheduler.sh@37 -- # waitforlisten 3641730 00:05:12.802 03:56:27 -- common/autotest_common.sh@817 -- # '[' -z 3641730 ']' 00:05:12.802 03:56:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.802 03:56:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.802 03:56:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.802 03:56:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.802 03:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 [2024-04-19 03:56:27.331706] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:13.062 [2024-04-19 03:56:27.331765] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641730 ] 00:05:13.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.062 [2024-04-19 03:56:27.414965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.062 [2024-04-19 03:56:27.508292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.062 [2024-04-19 03:56:27.508389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.062 [2024-04-19 03:56:27.508440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.062 [2024-04-19 03:56:27.508443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.999 03:56:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.999 03:56:28 -- common/autotest_common.sh@850 -- # return 0 00:05:13.999 03:56:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.999 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.999 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:13.999 POWER: Env isn't set yet! 00:05:13.999 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:13.999 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.999 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.999 POWER: Attempting to initialise PSTAT power management... 00:05:13.999 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:13.999 POWER: Initialized successfully for lcore 0 power management 00:05:13.999 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:13.999 POWER: Initialized successfully for lcore 1 power management 00:05:13.999 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:13.999 POWER: Initialized successfully for lcore 2 power management 00:05:13.999 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:13.999 POWER: Initialized successfully for lcore 3 power management 00:05:13.999 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.999 03:56:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.999 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.999 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:13.999 [2024-04-19 03:56:28.307528] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
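scheduler_create_thread exercises thread-management RPCs that only exist when rpc.py loads the test's scheduler_plugin; rpc_cmd in this harness forwards to scripts/rpc.py. The calls traced below reduce to this shape (thread ids and values as in the run):

# Create a thread pinned to core 0 that claims to be 100% busy...
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# ...later mark an existing thread (id 11) as only 50% active...
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
# ...and delete thread 12 once the test is done with it.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12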
00:05:13.999 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.999 03:56:28 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.999 03:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.999 03:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.999 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:13.999 ************************************ 00:05:14.000 START TEST scheduler_create_thread 00:05:14.000 ************************************ 00:05:14.000 03:56:28 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 2 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 3 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 4 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 5 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 6 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.000 7 00:05:14.000 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.000 03:56:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.000 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.000 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.258 8 00:05:14.258 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.259 03:56:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.259 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.259 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.259 9 00:05:14.259 
03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.259 03:56:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.259 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.259 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.259 10 00:05:14.259 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.259 03:56:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.259 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.259 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.517 03:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:14.518 03:56:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.518 03:56:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.518 03:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:14.518 03:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.454 03:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:15.454 03:56:29 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.454 03:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:15.454 03:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:16.390 03:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.390 03:56:30 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.390 03:56:30 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.390 03:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.390 03:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:17.328 03:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.328 00:05:17.328 real 0m3.231s 00:05:17.328 user 0m0.023s 00:05:17.328 sys 0m0.006s 00:05:17.328 03:56:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.328 03:56:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.328 ************************************ 00:05:17.328 END TEST scheduler_create_thread 00:05:17.328 ************************************ 00:05:17.328 03:56:31 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:17.328 03:56:31 -- scheduler/scheduler.sh@46 -- # killprocess 3641730 00:05:17.328 03:56:31 -- common/autotest_common.sh@936 -- # '[' -z 3641730 ']' 00:05:17.328 03:56:31 -- common/autotest_common.sh@940 -- # kill -0 3641730 00:05:17.328 03:56:31 -- common/autotest_common.sh@941 -- # uname 00:05:17.328 03:56:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.328 03:56:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3641730 00:05:17.328 03:56:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:17.328 03:56:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:17.328 03:56:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3641730' 00:05:17.328 killing process with pid 3641730 00:05:17.328 03:56:31 -- common/autotest_common.sh@955 -- # kill 3641730 00:05:17.328 03:56:31 -- common/autotest_common.sh@960 -- # wait 3641730 00:05:17.587 [2024-04-19 03:56:32.057067] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
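killprocess, traced just above, refuses to signal anything whose command name is sudo and then waits for the PID to be reaped; the POWER lines that follow are the scheduler app restoring each lcore's cpufreq governor on exit. A condensed sketch of the kill logic (the real helper in autotest_common.sh carries more branches, e.g. for non-Linux hosts and the sudo case):

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                   # still running?
  local name
  name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_2 in this run
  [ "$name" = sudo ] && return 1               # never plain-kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                   # wait succeeds for our own children
}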
00:05:17.847 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:17.847 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:17.847 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:17.847 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:17.847 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:17.847 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:17.847 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:17.847 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:17.847 00:05:17.847 real 0m5.142s 00:05:17.847 user 0m10.518s 00:05:17.847 sys 0m0.445s 00:05:17.847 03:56:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.847 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:17.847 ************************************ 00:05:17.847 END TEST event_scheduler 00:05:17.847 ************************************ 00:05:17.847 03:56:32 -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.847 03:56:32 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.847 03:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.847 03:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.847 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.107 ************************************ 00:05:18.107 START TEST app_repeat 00:05:18.107 ************************************ 00:05:18.107 03:56:32 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:18.107 03:56:32 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.107 03:56:32 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.107 03:56:32 -- event/event.sh@13 -- # local nbd_list 00:05:18.107 03:56:32 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.107 03:56:32 -- event/event.sh@14 -- # local bdev_list 00:05:18.107 03:56:32 -- event/event.sh@15 -- # local repeat_times=4 00:05:18.107 03:56:32 -- event/event.sh@17 -- # modprobe nbd 00:05:18.107 03:56:32 -- event/event.sh@19 -- # repeat_pid=3642778 00:05:18.107 03:56:32 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.107 03:56:32 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:18.107 03:56:32 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3642778' 00:05:18.107 Process app_repeat pid: 3642778 00:05:18.107 03:56:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:18.107 03:56:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:18.107 spdk_app_start Round 0 00:05:18.107 03:56:32 -- event/event.sh@25 -- # waitforlisten 3642778 /var/tmp/spdk-nbd.sock 00:05:18.107 03:56:32 -- common/autotest_common.sh@817 -- # '[' -z 3642778 ']' 00:05:18.107 03:56:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.107 03:56:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.107 03:56:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:18.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.107 03:56:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.107 03:56:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.107 [2024-04-19 03:56:32.547678] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:18.107 [2024-04-19 03:56:32.547739] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642778 ] 00:05:18.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.107 [2024-04-19 03:56:32.630666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.366 [2024-04-19 03:56:32.720284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.366 [2024-04-19 03:56:32.720290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.366 03:56:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.366 03:56:32 -- common/autotest_common.sh@850 -- # return 0 00:05:18.366 03:56:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.625 Malloc0 00:05:18.625 03:56:33 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.883 Malloc1 00:05:18.883 03:56:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@12 -- # local i 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.883 03:56:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.142 /dev/nbd0 00:05:19.142 03:56:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.142 03:56:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.142 03:56:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:19.142 03:56:33 -- common/autotest_common.sh@855 -- # local i 00:05:19.142 03:56:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:19.142 03:56:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:19.142 03:56:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:19.142 03:56:33 -- 
common/autotest_common.sh@859 -- # break 00:05:19.142 03:56:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:19.142 03:56:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:19.142 03:56:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.142 1+0 records in 00:05:19.142 1+0 records out 00:05:19.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182942 s, 22.4 MB/s 00:05:19.142 03:56:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.142 03:56:33 -- common/autotest_common.sh@872 -- # size=4096 00:05:19.142 03:56:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.142 03:56:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:19.142 03:56:33 -- common/autotest_common.sh@875 -- # return 0 00:05:19.142 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.142 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.142 03:56:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.401 /dev/nbd1 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.401 03:56:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:19.401 03:56:33 -- common/autotest_common.sh@855 -- # local i 00:05:19.401 03:56:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:19.401 03:56:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:19.401 03:56:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:19.401 03:56:33 -- common/autotest_common.sh@859 -- # break 00:05:19.401 03:56:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:19.401 03:56:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:19.401 03:56:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.401 1+0 records in 00:05:19.401 1+0 records out 00:05:19.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222142 s, 18.4 MB/s 00:05:19.401 03:56:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.401 03:56:33 -- common/autotest_common.sh@872 -- # size=4096 00:05:19.401 03:56:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:19.401 03:56:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:19.401 03:56:33 -- common/autotest_common.sh@875 -- # return 0 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.401 03:56:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.660 { 00:05:19.660 "nbd_device": "/dev/nbd0", 00:05:19.660 "bdev_name": "Malloc0" 00:05:19.660 }, 00:05:19.660 { 00:05:19.660 "nbd_device": "/dev/nbd1", 
00:05:19.660 "bdev_name": "Malloc1" 00:05:19.660 } 00:05:19.660 ]' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.660 { 00:05:19.660 "nbd_device": "/dev/nbd0", 00:05:19.660 "bdev_name": "Malloc0" 00:05:19.660 }, 00:05:19.660 { 00:05:19.660 "nbd_device": "/dev/nbd1", 00:05:19.660 "bdev_name": "Malloc1" 00:05:19.660 } 00:05:19.660 ]' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.660 /dev/nbd1' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.660 /dev/nbd1' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.660 03:56:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.661 03:56:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.661 256+0 records in 00:05:19.661 256+0 records out 00:05:19.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00976844 s, 107 MB/s 00:05:19.661 03:56:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.919 03:56:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.919 256+0 records in 00:05:19.919 256+0 records out 00:05:19.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191649 s, 54.7 MB/s 00:05:19.919 03:56:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.919 03:56:34 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.920 256+0 records in 00:05:19.920 256+0 records out 00:05:19.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206982 s, 50.7 MB/s 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@51 -- # local i 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.920 03:56:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@41 -- # break 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.179 03:56:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@41 -- # break 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.437 03:56:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@65 -- # true 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.696 03:56:35 -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.696 03:56:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.955 03:56:35 -- event/event.sh@35 -- # 
sleep 3 00:05:21.214 [2024-04-19 03:56:35.554511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.214 [2024-04-19 03:56:35.636719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.214 [2024-04-19 03:56:35.636723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.214 [2024-04-19 03:56:35.682027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.214 [2024-04-19 03:56:35.682075] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.504 03:56:38 -- event/event.sh@23 -- # for i in {0..2} 00:05:24.504 03:56:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.504 spdk_app_start Round 1 00:05:24.504 03:56:38 -- event/event.sh@25 -- # waitforlisten 3642778 /var/tmp/spdk-nbd.sock 00:05:24.504 03:56:38 -- common/autotest_common.sh@817 -- # '[' -z 3642778 ']' 00:05:24.504 03:56:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.504 03:56:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.504 03:56:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.504 03:56:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.504 03:56:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.504 03:56:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.504 03:56:38 -- common/autotest_common.sh@850 -- # return 0 00:05:24.504 03:56:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.504 Malloc0 00:05:24.504 03:56:38 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.763 Malloc1 00:05:24.763 03:56:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.763 03:56:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.763 03:56:39 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@12 -- # local i 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.764 03:56:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.023 /dev/nbd0 00:05:25.023 03:56:39 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.023 03:56:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.023 03:56:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:25.023 03:56:39 -- common/autotest_common.sh@855 -- # local i 00:05:25.023 03:56:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:25.023 03:56:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:25.023 03:56:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:25.023 03:56:39 -- common/autotest_common.sh@859 -- # break 00:05:25.023 03:56:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:25.023 03:56:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:25.023 03:56:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.023 1+0 records in 00:05:25.023 1+0 records out 00:05:25.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149951 s, 27.3 MB/s 00:05:25.023 03:56:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.023 03:56:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:25.023 03:56:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.023 03:56:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:25.023 03:56:39 -- common/autotest_common.sh@875 -- # return 0 00:05:25.023 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.023 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.023 03:56:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.282 /dev/nbd1 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.282 03:56:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:25.282 03:56:39 -- common/autotest_common.sh@855 -- # local i 00:05:25.282 03:56:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:25.282 03:56:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:25.282 03:56:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:25.282 03:56:39 -- common/autotest_common.sh@859 -- # break 00:05:25.282 03:56:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:25.282 03:56:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:25.282 03:56:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.282 1+0 records in 00:05:25.282 1+0 records out 00:05:25.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208284 s, 19.7 MB/s 00:05:25.282 03:56:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.282 03:56:39 -- common/autotest_common.sh@872 -- # size=4096 00:05:25.282 03:56:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.282 03:56:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:25.282 03:56:39 -- common/autotest_common.sh@875 -- # return 0 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.282 03:56:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.541 03:56:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.541 { 00:05:25.541 "nbd_device": "/dev/nbd0", 00:05:25.541 "bdev_name": "Malloc0" 00:05:25.542 }, 00:05:25.542 { 00:05:25.542 "nbd_device": "/dev/nbd1", 00:05:25.542 "bdev_name": "Malloc1" 00:05:25.542 } 00:05:25.542 ]' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.542 { 00:05:25.542 "nbd_device": "/dev/nbd0", 00:05:25.542 "bdev_name": "Malloc0" 00:05:25.542 }, 00:05:25.542 { 00:05:25.542 "nbd_device": "/dev/nbd1", 00:05:25.542 "bdev_name": "Malloc1" 00:05:25.542 } 00:05:25.542 ]' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.542 /dev/nbd1' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.542 /dev/nbd1' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.542 256+0 records in 00:05:25.542 256+0 records out 00:05:25.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00984509 s, 107 MB/s 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.542 256+0 records in 00:05:25.542 256+0 records out 00:05:25.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193634 s, 54.2 MB/s 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.542 256+0 records in 00:05:25.542 256+0 records out 00:05:25.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209637 s, 50.0 MB/s 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@51 -- # local i 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.542 03:56:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@41 -- # break 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.801 03:56:40 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@41 -- # break 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.060 03:56:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@65 -- # true 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.319 03:56:40 -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.319 03:56:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.578 03:56:41 -- event/event.sh@35 -- # sleep 3 00:05:26.854 [2024-04-19 03:56:41.286009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.854 [2024-04-19 03:56:41.360702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.855 [2024-04-19 03:56:41.360706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.114 [2024-04-19 03:56:41.407082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.114 [2024-04-19 03:56:41.407129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.647 03:56:44 -- event/event.sh@23 -- # for i in {0..2} 00:05:29.647 03:56:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.647 spdk_app_start Round 2 00:05:29.647 03:56:44 -- event/event.sh@25 -- # waitforlisten 3642778 /var/tmp/spdk-nbd.sock 00:05:29.647 03:56:44 -- common/autotest_common.sh@817 -- # '[' -z 3642778 ']' 00:05:29.647 03:56:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.647 03:56:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.647 03:56:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
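[app_repeat round loop] The xtrace above shows test/event/event.sh driving app_repeat through restart rounds: the app is launched once with -t 4, and for each of rounds 0-2 the harness waits for the RPC socket, runs the NBD verification, then asks the app to kill its current instance with spdk_kill_instance SIGTERM and sleeps 3 s while it respawns. A minimal sketch of that loop, reconstructed from the trace (waitforlisten and killprocess are SPDK autotest_common.sh helpers; $rootdir and the helper bodies are assumptions, only the flow is taken from the log):

    # Sketch of the app_repeat round loop seen in the xtrace above.
    rpc_server=/var/tmp/spdk-nbd.sock
    $rootdir/test/event/app_repeat/app_repeat -r $rpc_server -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    for i in {0..2}; do                               # rounds 0, 1, 2
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid $rpc_server         # poll until the UNIX socket answers
        # ... create Malloc bdevs, export over NBD, write+verify (see the sketch below) ...
        $rootdir/scripts/rpc.py -s $rpc_server spdk_kill_instance SIGTERM
        sleep 3                                       # the app restarts itself for the next round
    done
    waitforlisten $repeat_pid $rpc_server             # round 3: the final instance
    killprocess $repeat_pid                           # real shutdown; "-t 4" = 4 rounds total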
00:05:29.647 03:56:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.647 03:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.906 03:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.906 03:56:44 -- common/autotest_common.sh@850 -- # return 0 00:05:29.906 03:56:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.165 Malloc0 00:05:30.165 03:56:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.424 Malloc1 00:05:30.424 03:56:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.424 03:56:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.682 /dev/nbd0 00:05:30.682 03:56:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.682 03:56:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.683 03:56:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:30.683 03:56:45 -- common/autotest_common.sh@855 -- # local i 00:05:30.683 03:56:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:30.683 03:56:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:30.683 03:56:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:30.683 03:56:45 -- common/autotest_common.sh@859 -- # break 00:05:30.683 03:56:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.683 03:56:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.683 03:56:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.683 1+0 records in 00:05:30.683 1+0 records out 00:05:30.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244289 s, 16.8 MB/s 00:05:30.683 03:56:45 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.683 03:56:45 -- common/autotest_common.sh@872 -- # size=4096 00:05:30.683 03:56:45 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.683 03:56:45 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:30.683 03:56:45 -- common/autotest_common.sh@875 -- # return 0 00:05:30.683 03:56:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.683 03:56:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.683 03:56:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.941 /dev/nbd1 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.941 03:56:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:30.941 03:56:45 -- common/autotest_common.sh@855 -- # local i 00:05:30.941 03:56:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:30.941 03:56:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:30.941 03:56:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:30.941 03:56:45 -- common/autotest_common.sh@859 -- # break 00:05:30.941 03:56:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.941 03:56:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.941 03:56:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.941 1+0 records in 00:05:30.941 1+0 records out 00:05:30.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193637 s, 21.2 MB/s 00:05:30.941 03:56:45 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.941 03:56:45 -- common/autotest_common.sh@872 -- # size=4096 00:05:30.941 03:56:45 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.941 03:56:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:30.941 03:56:45 -- common/autotest_common.sh@875 -- # return 0 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.941 03:56:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.200 { 00:05:31.200 "nbd_device": "/dev/nbd0", 00:05:31.200 "bdev_name": "Malloc0" 00:05:31.200 }, 00:05:31.200 { 00:05:31.200 "nbd_device": "/dev/nbd1", 00:05:31.200 "bdev_name": "Malloc1" 00:05:31.200 } 00:05:31.200 ]' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.200 { 00:05:31.200 "nbd_device": "/dev/nbd0", 00:05:31.200 "bdev_name": "Malloc0" 00:05:31.200 }, 00:05:31.200 { 00:05:31.200 "nbd_device": "/dev/nbd1", 00:05:31.200 "bdev_name": "Malloc1" 00:05:31.200 } 00:05:31.200 ]' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.200 /dev/nbd1' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.200 /dev/nbd1' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.200 03:56:45 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.200 256+0 records in 00:05:31.200 256+0 records out 00:05:31.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103028 s, 102 MB/s 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.200 256+0 records in 00:05:31.200 256+0 records out 00:05:31.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196572 s, 53.3 MB/s 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.200 256+0 records in 00:05:31.200 256+0 records out 00:05:31.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203212 s, 51.6 MB/s 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.200 03:56:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.460 03:56:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.718 03:56:45 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@41 -- # break 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.718 03:56:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@41 -- # break 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.976 03:56:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@65 -- # true 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.234 03:56:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.234 03:56:46 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.493 03:56:46 -- event/event.sh@35 -- # sleep 3 00:05:32.751 [2024-04-19 03:56:47.037161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.751 [2024-04-19 03:56:47.114445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.751 [2024-04-19 03:56:47.114450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.751 [2024-04-19 03:56:47.160749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.751 [2024-04-19 03:56:47.160795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
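[NBD write/verify] Every round repeats the same data-integrity dance visible in the trace: nbd_start_disk exports each Malloc bdev as a /dev/nbdX block device, waitfornbd polls /proc/partitions and issues a 4 KiB O_DIRECT read until the device answers I/O, then 1 MiB of random data is pushed through each device and compared back. A condensed sketch of the write/verify half, matching the dd/cmp lines above (the loop shape is a simplification of nbd_common.sh, not the literal helper; $rootdir is an assumption):

    # Condensed sketch of nbd_common.sh's write/verify flow, per the xtrace.
    tmp_file=$rootdir/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256            # 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct   # write through the NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                              # byte-compare; non-zero exit on any mismatch
    done
    rm $tmp_file                                                 # nbd_stop_disk then tears both devices down

The ~50 MB/s dd figures above are unsurprising for 4 KiB O_DIRECT writes over NBD; this is a correctness check, not a performance measurement.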
00:05:36.039 03:56:49 -- event/event.sh@38 -- # waitforlisten 3642778 /var/tmp/spdk-nbd.sock 00:05:36.039 03:56:49 -- common/autotest_common.sh@817 -- # '[' -z 3642778 ']' 00:05:36.039 03:56:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.039 03:56:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.039 03:56:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.039 03:56:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.039 03:56:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.039 03:56:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.039 03:56:50 -- common/autotest_common.sh@850 -- # return 0 00:05:36.039 03:56:50 -- event/event.sh@39 -- # killprocess 3642778 00:05:36.039 03:56:50 -- common/autotest_common.sh@936 -- # '[' -z 3642778 ']' 00:05:36.039 03:56:50 -- common/autotest_common.sh@940 -- # kill -0 3642778 00:05:36.039 03:56:50 -- common/autotest_common.sh@941 -- # uname 00:05:36.039 03:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.039 03:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3642778 00:05:36.039 03:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.039 03:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.039 03:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3642778' 00:05:36.039 killing process with pid 3642778 00:05:36.039 03:56:50 -- common/autotest_common.sh@955 -- # kill 3642778 00:05:36.039 03:56:50 -- common/autotest_common.sh@960 -- # wait 3642778 00:05:36.039 spdk_app_start is called in Round 0. 00:05:36.039 Shutdown signal received, stop current app iteration 00:05:36.039 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:05:36.039 spdk_app_start is called in Round 1. 00:05:36.039 Shutdown signal received, stop current app iteration 00:05:36.039 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:05:36.039 spdk_app_start is called in Round 2. 00:05:36.039 Shutdown signal received, stop current app iteration 00:05:36.039 Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 reinitialization... 00:05:36.039 spdk_app_start is called in Round 3. 
00:05:36.039 Shutdown signal received, stop current app iteration 00:05:36.039 03:56:50 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:36.039 03:56:50 -- event/event.sh@42 -- # return 0 00:05:36.039 00:05:36.039 real 0m17.783s 00:05:36.039 user 0m39.496s 00:05:36.039 sys 0m2.827s 00:05:36.039 03:56:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.039 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.039 ************************************ 00:05:36.039 END TEST app_repeat 00:05:36.039 ************************************ 00:05:36.039 03:56:50 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:36.039 03:56:50 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.039 03:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.039 03:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.039 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.039 ************************************ 00:05:36.039 START TEST cpu_locks 00:05:36.039 ************************************ 00:05:36.039 03:56:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.039 * Looking for test storage... 00:05:36.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.039 03:56:50 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.039 03:56:50 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.039 03:56:50 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.039 03:56:50 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.039 03:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.039 03:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.039 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.299 ************************************ 00:05:36.299 START TEST default_locks 00:05:36.299 ************************************ 00:05:36.299 03:56:50 -- common/autotest_common.sh@1111 -- # default_locks 00:05:36.299 03:56:50 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3646419 00:05:36.299 03:56:50 -- event/cpu_locks.sh@47 -- # waitforlisten 3646419 00:05:36.299 03:56:50 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.299 03:56:50 -- common/autotest_common.sh@817 -- # '[' -z 3646419 ']' 00:05:36.299 03:56:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.299 03:56:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:36.299 03:56:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.299 03:56:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:36.299 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.299 [2024-04-19 03:56:50.699746] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:05:36.299 [2024-04-19 03:56:50.699801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646419 ] 00:05:36.299 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.299 [2024-04-19 03:56:50.780291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.558 [2024-04-19 03:56:50.867061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.126 03:56:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.126 03:56:51 -- common/autotest_common.sh@850 -- # return 0 00:05:37.126 03:56:51 -- event/cpu_locks.sh@49 -- # locks_exist 3646419 00:05:37.126 03:56:51 -- event/cpu_locks.sh@22 -- # lslocks -p 3646419 00:05:37.126 03:56:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.693 lslocks: write error 00:05:37.693 03:56:51 -- event/cpu_locks.sh@50 -- # killprocess 3646419 00:05:37.693 03:56:51 -- common/autotest_common.sh@936 -- # '[' -z 3646419 ']' 00:05:37.693 03:56:51 -- common/autotest_common.sh@940 -- # kill -0 3646419 00:05:37.693 03:56:51 -- common/autotest_common.sh@941 -- # uname 00:05:37.693 03:56:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.693 03:56:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3646419 00:05:37.693 03:56:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.693 03:56:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.693 03:56:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3646419' 00:05:37.693 killing process with pid 3646419 00:05:37.693 03:56:52 -- common/autotest_common.sh@955 -- # kill 3646419 00:05:37.693 03:56:52 -- common/autotest_common.sh@960 -- # wait 3646419 00:05:37.952 03:56:52 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3646419 00:05:37.952 03:56:52 -- common/autotest_common.sh@638 -- # local es=0 00:05:37.952 03:56:52 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3646419 00:05:37.952 03:56:52 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:37.952 03:56:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.952 03:56:52 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:37.952 03:56:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.952 03:56:52 -- common/autotest_common.sh@641 -- # waitforlisten 3646419 00:05:37.952 03:56:52 -- common/autotest_common.sh@817 -- # '[' -z 3646419 ']' 00:05:37.952 03:56:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.952 03:56:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.952 03:56:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.952 03:56:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.952 03:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3646419) - No such process 00:05:37.952 ERROR: process (pid: 3646419) is no longer running 00:05:37.952 03:56:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.952 03:56:52 -- common/autotest_common.sh@850 -- # return 1 00:05:37.952 03:56:52 -- common/autotest_common.sh@641 -- # es=1 00:05:37.952 03:56:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.952 03:56:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:37.952 03:56:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.952 03:56:52 -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.952 03:56:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.952 03:56:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.952 03:56:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.952 00:05:37.952 real 0m1.772s 00:05:37.952 user 0m1.943s 00:05:37.952 sys 0m0.565s 00:05:37.952 03:56:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.952 03:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:37.952 ************************************ 00:05:37.952 END TEST default_locks 00:05:37.952 ************************************ 00:05:37.952 03:56:52 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.952 03:56:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.952 03:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.952 03:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.211 ************************************ 00:05:38.211 START TEST default_locks_via_rpc 00:05:38.211 ************************************ 00:05:38.211 03:56:52 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:38.211 03:56:52 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3646724 00:05:38.211 03:56:52 -- event/cpu_locks.sh@63 -- # waitforlisten 3646724 00:05:38.211 03:56:52 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.211 03:56:52 -- common/autotest_common.sh@817 -- # '[' -z 3646724 ']' 00:05:38.211 03:56:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.211 03:56:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.211 03:56:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.211 03:56:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.211 03:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.211 [2024-04-19 03:56:52.649600] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
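[expected-failure assertion] The "kill: (3646419) - No such process" and "ERROR: process (pid: 3646419) is no longer running" lines above are the test passing, not failing: after killprocess, default_locks re-runs waitforlisten under the NOT wrapper, which succeeds only when the wrapped command fails (the es=1 in the trace). A simplified stand-in for the autotest_common.sh wrapper (the real one, as the trace shows, also validates its argument and special-cases exit codes above 128):

    # Simplified NOT wrapper: run a command that is expected to fail and
    # invert its exit status, so an expected failure counts as a pass.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    killprocess $spdk_tgt_pid
    NOT waitforlisten $spdk_tgt_pid    # must fail now that the daemon is gone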
00:05:38.211 [2024-04-19 03:56:52.649654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646724 ] 00:05:38.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.211 [2024-04-19 03:56:52.730081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.469 [2024-04-19 03:56:52.819895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.728 03:56:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.728 03:56:53 -- common/autotest_common.sh@850 -- # return 0 00:05:38.728 03:56:53 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.728 03:56:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.728 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.728 03:56:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:38.728 03:56:53 -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.728 03:56:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.728 03:56:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.728 03:56:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.728 03:56:53 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.728 03:56:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.728 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.728 03:56:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:38.728 03:56:53 -- event/cpu_locks.sh@71 -- # locks_exist 3646724 00:05:38.728 03:56:53 -- event/cpu_locks.sh@22 -- # lslocks -p 3646724 00:05:38.728 03:56:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.728 03:56:53 -- event/cpu_locks.sh@73 -- # killprocess 3646724 00:05:38.728 03:56:53 -- common/autotest_common.sh@936 -- # '[' -z 3646724 ']' 00:05:38.728 03:56:53 -- common/autotest_common.sh@940 -- # kill -0 3646724 00:05:38.728 03:56:53 -- common/autotest_common.sh@941 -- # uname 00:05:38.728 03:56:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.728 03:56:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3646724 00:05:38.987 03:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.987 03:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.987 03:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3646724' 00:05:38.987 killing process with pid 3646724 00:05:38.987 03:56:53 -- common/autotest_common.sh@955 -- # kill 3646724 00:05:38.987 03:56:53 -- common/autotest_common.sh@960 -- # wait 3646724 00:05:39.246 00:05:39.246 real 0m1.067s 00:05:39.246 user 0m1.053s 00:05:39.246 sys 0m0.472s 00:05:39.246 03:56:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.246 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.246 ************************************ 00:05:39.246 END TEST default_locks_via_rpc 00:05:39.246 ************************************ 00:05:39.246 03:56:53 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.246 03:56:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.246 03:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.246 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 ************************************ 00:05:39.505 START TEST non_locking_app_on_locked_coremask 
00:05:39.505 ************************************ 00:05:39.505 03:56:53 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:39.505 03:56:53 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3647017 00:05:39.505 03:56:53 -- event/cpu_locks.sh@81 -- # waitforlisten 3647017 /var/tmp/spdk.sock 00:05:39.505 03:56:53 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.505 03:56:53 -- common/autotest_common.sh@817 -- # '[' -z 3647017 ']' 00:05:39.505 03:56:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.505 03:56:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.505 03:56:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.505 03:56:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.505 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 [2024-04-19 03:56:53.868999] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:39.505 [2024-04-19 03:56:53.869052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647017 ] 00:05:39.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.505 [2024-04-19 03:56:53.950933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.764 [2024-04-19 03:56:54.040986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.332 03:56:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.332 03:56:54 -- common/autotest_common.sh@850 -- # return 0 00:05:40.332 03:56:54 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.332 03:56:54 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3647270 00:05:40.332 03:56:54 -- event/cpu_locks.sh@85 -- # waitforlisten 3647270 /var/tmp/spdk2.sock 00:05:40.332 03:56:54 -- common/autotest_common.sh@817 -- # '[' -z 3647270 ']' 00:05:40.332 03:56:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.332 03:56:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.332 03:56:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.332 03:56:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.332 03:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:40.332 [2024-04-19 03:56:54.836902] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:40.332 [2024-04-19 03:56:54.836965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647270 ] 00:05:40.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.590 [2024-04-19 03:56:54.946203] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.590 [2024-04-19 03:56:54.946234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.849 [2024-04-19 03:56:55.124170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.416 03:56:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.416 03:56:55 -- common/autotest_common.sh@850 -- # return 0 00:05:41.416 03:56:55 -- event/cpu_locks.sh@87 -- # locks_exist 3647017 00:05:41.416 03:56:55 -- event/cpu_locks.sh@22 -- # lslocks -p 3647017 00:05:41.416 03:56:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.983 lslocks: write error 00:05:41.983 03:56:56 -- event/cpu_locks.sh@89 -- # killprocess 3647017 00:05:41.983 03:56:56 -- common/autotest_common.sh@936 -- # '[' -z 3647017 ']' 00:05:41.983 03:56:56 -- common/autotest_common.sh@940 -- # kill -0 3647017 00:05:41.983 03:56:56 -- common/autotest_common.sh@941 -- # uname 00:05:41.983 03:56:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.983 03:56:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647017 00:05:41.983 03:56:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.983 03:56:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.983 03:56:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647017' 00:05:41.983 killing process with pid 3647017 00:05:41.983 03:56:56 -- common/autotest_common.sh@955 -- # kill 3647017 00:05:41.983 03:56:56 -- common/autotest_common.sh@960 -- # wait 3647017 00:05:42.550 03:56:57 -- event/cpu_locks.sh@90 -- # killprocess 3647270 00:05:42.550 03:56:57 -- common/autotest_common.sh@936 -- # '[' -z 3647270 ']' 00:05:42.551 03:56:57 -- common/autotest_common.sh@940 -- # kill -0 3647270 00:05:42.551 03:56:57 -- common/autotest_common.sh@941 -- # uname 00:05:42.551 03:56:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.551 03:56:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647270 00:05:42.809 03:56:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.809 03:56:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.809 03:56:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647270' 00:05:42.809 killing process with pid 3647270 00:05:42.809 03:56:57 -- common/autotest_common.sh@955 -- # kill 3647270 00:05:42.809 03:56:57 -- common/autotest_common.sh@960 -- # wait 3647270 00:05:43.068 00:05:43.068 real 0m3.626s 00:05:43.068 user 0m4.069s 00:05:43.068 sys 0m1.014s 00:05:43.068 03:56:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.068 03:56:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.068 ************************************ 00:05:43.068 END TEST non_locking_app_on_locked_coremask 00:05:43.068 ************************************ 00:05:43.068 03:56:57 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.068 03:56:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.068 03:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.068 03:56:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.327 ************************************ 00:05:43.327 START TEST locking_app_on_unlocked_coremask 00:05:43.327 ************************************ 00:05:43.327 03:56:57 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:43.327 03:56:57 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3647813 00:05:43.327 03:56:57 -- 
event/cpu_locks.sh@99 -- # waitforlisten 3647813 /var/tmp/spdk.sock 00:05:43.327 03:56:57 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.327 03:56:57 -- common/autotest_common.sh@817 -- # '[' -z 3647813 ']' 00:05:43.327 03:56:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.327 03:56:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.327 03:56:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.327 03:56:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.327 03:56:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.327 [2024-04-19 03:56:57.667137] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:43.327 [2024-04-19 03:56:57.667188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647813 ] 00:05:43.327 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.327 [2024-04-19 03:56:57.749417] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.327 [2024-04-19 03:56:57.749449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.327 [2024-04-19 03:56:57.835347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.587 03:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.587 03:56:58 -- common/autotest_common.sh@850 -- # return 0 00:05:43.587 03:56:58 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3647857 00:05:43.587 03:56:58 -- event/cpu_locks.sh@103 -- # waitforlisten 3647857 /var/tmp/spdk2.sock 00:05:43.587 03:56:58 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.587 03:56:58 -- common/autotest_common.sh@817 -- # '[' -z 3647857 ']' 00:05:43.587 03:56:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.587 03:56:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.587 03:56:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.587 03:56:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.587 03:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.587 [2024-04-19 03:56:58.101327] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
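The locks_exist helper (event/cpu_locks.sh@22 in the traces) is the assertion used around every kill: a running target holds a lock on its /var/tmp/spdk_cpu_lock_* file, so that file must show up in lslocks for its pid. Reconstructed from the traced commands:

    # lock check as traced at event/cpu_locks.sh@22
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

The recurring "lslocks: write error" lines are expected noise: grep -q exits on the first match, so lslocks takes a broken pipe while still printing.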
00:05:43.587 [2024-04-19 03:56:58.101388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647857 ] 00:05:43.846 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.846 [2024-04-19 03:56:58.207819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.105 [2024-04-19 03:56:58.382479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.671 03:56:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.671 03:56:59 -- common/autotest_common.sh@850 -- # return 0 00:05:44.671 03:56:59 -- event/cpu_locks.sh@105 -- # locks_exist 3647857 00:05:44.671 03:56:59 -- event/cpu_locks.sh@22 -- # lslocks -p 3647857 00:05:44.671 03:56:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.607 lslocks: write error 00:05:45.607 03:56:59 -- event/cpu_locks.sh@107 -- # killprocess 3647813 00:05:45.607 03:56:59 -- common/autotest_common.sh@936 -- # '[' -z 3647813 ']' 00:05:45.607 03:56:59 -- common/autotest_common.sh@940 -- # kill -0 3647813 00:05:45.607 03:56:59 -- common/autotest_common.sh@941 -- # uname 00:05:45.607 03:56:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.607 03:56:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647813 00:05:45.607 03:56:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.607 03:56:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.607 03:56:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647813' 00:05:45.607 killing process with pid 3647813 00:05:45.607 03:56:59 -- common/autotest_common.sh@955 -- # kill 3647813 00:05:45.607 03:56:59 -- common/autotest_common.sh@960 -- # wait 3647813 00:05:46.175 03:57:00 -- event/cpu_locks.sh@108 -- # killprocess 3647857 00:05:46.175 03:57:00 -- common/autotest_common.sh@936 -- # '[' -z 3647857 ']' 00:05:46.175 03:57:00 -- common/autotest_common.sh@940 -- # kill -0 3647857 00:05:46.175 03:57:00 -- common/autotest_common.sh@941 -- # uname 00:05:46.175 03:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.175 03:57:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3647857 00:05:46.175 03:57:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.175 03:57:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.175 03:57:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3647857' 00:05:46.175 killing process with pid 3647857 00:05:46.175 03:57:00 -- common/autotest_common.sh@955 -- # kill 3647857 00:05:46.175 03:57:00 -- common/autotest_common.sh@960 -- # wait 3647857 00:05:46.744 00:05:46.744 real 0m3.390s 00:05:46.744 user 0m3.639s 00:05:46.744 sys 0m1.120s 00:05:46.744 03:57:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.744 03:57:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.744 ************************************ 00:05:46.744 END TEST locking_app_on_unlocked_coremask 00:05:46.744 ************************************ 00:05:46.744 03:57:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.744 03:57:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.744 03:57:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.744 03:57:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.744 
************************************ 00:05:46.744 START TEST locking_app_on_locked_coremask 00:05:46.744 ************************************ 00:05:46.744 03:57:01 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:46.744 03:57:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3648500 00:05:46.744 03:57:01 -- event/cpu_locks.sh@116 -- # waitforlisten 3648500 /var/tmp/spdk.sock 00:05:46.744 03:57:01 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.744 03:57:01 -- common/autotest_common.sh@817 -- # '[' -z 3648500 ']' 00:05:46.744 03:57:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.744 03:57:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.744 03:57:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.744 03:57:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.744 03:57:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.744 [2024-04-19 03:57:01.227712] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:46.744 [2024-04-19 03:57:01.227765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648500 ] 00:05:46.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.003 [2024-04-19 03:57:01.306869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.003 [2024-04-19 03:57:01.398323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.968 03:57:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.968 03:57:02 -- common/autotest_common.sh@850 -- # return 0 00:05:47.968 03:57:02 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3648813 00:05:47.968 03:57:02 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3648813 /var/tmp/spdk2.sock 00:05:47.968 03:57:02 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.968 03:57:02 -- common/autotest_common.sh@638 -- # local es=0 00:05:47.968 03:57:02 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3648813 /var/tmp/spdk2.sock 00:05:47.968 03:57:02 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:47.968 03:57:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:47.968 03:57:02 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:47.968 03:57:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:47.968 03:57:02 -- common/autotest_common.sh@641 -- # waitforlisten 3648813 /var/tmp/spdk2.sock 00:05:47.968 03:57:02 -- common/autotest_common.sh@817 -- # '[' -z 3648813 ']' 00:05:47.968 03:57:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.968 03:57:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.968 03:57:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
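locking_app_on_locked_coremask inverts the expectation: the second spdk_tgt is started on the already-locked core without --disable-cpumask-locks and is wrapped in NOT, so the test passes only if startup fails. A simplified reconstruction of NOT from the autotest_common.sh lines traced here (the real helper also validates its argument via valid_exec_arg and maps statuses through a case table):

    # succeeds only when the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        # statuses above 128 mean death by signal; fold back before testing
        (( es > 128 )) && es=$(( es - 128 ))
        (( es != 0 ))
    }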
00:05:47.968 03:57:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.968 03:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.968 [2024-04-19 03:57:02.214080] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:47.968 [2024-04-19 03:57:02.214140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648813 ] 00:05:47.968 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.968 [2024-04-19 03:57:02.323922] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3648500 has claimed it. 00:05:47.968 [2024-04-19 03:57:02.323967] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3648813) - No such process 00:05:48.537 ERROR: process (pid: 3648813) is no longer running 00:05:48.537 03:57:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.537 03:57:02 -- common/autotest_common.sh@850 -- # return 1 00:05:48.537 03:57:02 -- common/autotest_common.sh@641 -- # es=1 00:05:48.537 03:57:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:48.537 03:57:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:48.537 03:57:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:48.537 03:57:02 -- event/cpu_locks.sh@122 -- # locks_exist 3648500 00:05:48.537 03:57:02 -- event/cpu_locks.sh@22 -- # lslocks -p 3648500 00:05:48.537 03:57:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.803 lslocks: write error 00:05:48.803 03:57:03 -- event/cpu_locks.sh@124 -- # killprocess 3648500 00:05:48.803 03:57:03 -- common/autotest_common.sh@936 -- # '[' -z 3648500 ']' 00:05:48.803 03:57:03 -- common/autotest_common.sh@940 -- # kill -0 3648500 00:05:48.803 03:57:03 -- common/autotest_common.sh@941 -- # uname 00:05:48.803 03:57:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.803 03:57:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3648500 00:05:49.064 03:57:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.064 03:57:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.064 03:57:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3648500' 00:05:49.064 killing process with pid 3648500 00:05:49.064 03:57:03 -- common/autotest_common.sh@955 -- # kill 3648500 00:05:49.064 03:57:03 -- common/autotest_common.sh@960 -- # wait 3648500 00:05:49.323 00:05:49.323 real 0m2.551s 00:05:49.323 user 0m2.957s 00:05:49.323 sys 0m0.670s 00:05:49.323 03:57:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.323 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.323 ************************************ 00:05:49.323 END TEST locking_app_on_locked_coremask 00:05:49.323 ************************************ 00:05:49.323 03:57:03 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.323 03:57:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.323 03:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.323 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.583 ************************************ 00:05:49.583 START TEST locking_overlapped_coremask 00:05:49.583 
************************************ 00:05:49.583 03:57:03 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:49.583 03:57:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3649120 00:05:49.583 03:57:03 -- event/cpu_locks.sh@133 -- # waitforlisten 3649120 /var/tmp/spdk.sock 00:05:49.583 03:57:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.583 03:57:03 -- common/autotest_common.sh@817 -- # '[' -z 3649120 ']' 00:05:49.583 03:57:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.583 03:57:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.583 03:57:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.583 03:57:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.583 03:57:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.583 [2024-04-19 03:57:03.953431] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:49.583 [2024-04-19 03:57:03.953483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649120 ] 00:05:49.583 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.583 [2024-04-19 03:57:04.034434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.841 [2024-04-19 03:57:04.125862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.841 [2024-04-19 03:57:04.125974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.841 [2024-04-19 03:57:04.125978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.410 03:57:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.410 03:57:04 -- common/autotest_common.sh@850 -- # return 0 00:05:50.410 03:57:04 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3649309 00:05:50.410 03:57:04 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3649309 /var/tmp/spdk2.sock 00:05:50.410 03:57:04 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.410 03:57:04 -- common/autotest_common.sh@638 -- # local es=0 00:05:50.410 03:57:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3649309 /var/tmp/spdk2.sock 00:05:50.410 03:57:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:50.410 03:57:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.410 03:57:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:50.410 03:57:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:50.410 03:57:04 -- common/autotest_common.sh@641 -- # waitforlisten 3649309 /var/tmp/spdk2.sock 00:05:50.410 03:57:04 -- common/autotest_common.sh@817 -- # '[' -z 3649309 ']' 00:05:50.410 03:57:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.410 03:57:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.410 03:57:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
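The two reactor masks used here are chosen to collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the contested core named in the claim_cpu_cores error that follows. Quick check:

    # intersection of the two masks
    printf 'overlap: %#x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4, i.e. core 2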
00:05:50.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.410 03:57:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.410 03:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:50.410 [2024-04-19 03:57:04.780920] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:50.410 [2024-04-19 03:57:04.780985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649309 ] 00:05:50.410 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.410 [2024-04-19 03:57:04.860275] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3649120 has claimed it. 00:05:50.410 [2024-04-19 03:57:04.860309] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3649309) - No such process 00:05:50.978 ERROR: process (pid: 3649309) is no longer running 00:05:50.978 03:57:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.978 03:57:05 -- common/autotest_common.sh@850 -- # return 1 00:05:50.978 03:57:05 -- common/autotest_common.sh@641 -- # es=1 00:05:50.978 03:57:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:50.978 03:57:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:50.978 03:57:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:50.978 03:57:05 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.978 03:57:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.978 03:57:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.978 03:57:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.978 03:57:05 -- event/cpu_locks.sh@141 -- # killprocess 3649120 00:05:50.978 03:57:05 -- common/autotest_common.sh@936 -- # '[' -z 3649120 ']' 00:05:50.978 03:57:05 -- common/autotest_common.sh@940 -- # kill -0 3649120 00:05:50.978 03:57:05 -- common/autotest_common.sh@941 -- # uname 00:05:50.978 03:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.978 03:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3649120 00:05:51.237 03:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.237 03:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.237 03:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3649120' 00:05:51.237 killing process with pid 3649120 00:05:51.237 03:57:05 -- common/autotest_common.sh@955 -- # kill 3649120 00:05:51.237 03:57:05 -- common/autotest_common.sh@960 -- # wait 3649120 00:05:51.496 00:05:51.496 real 0m1.989s 00:05:51.496 user 0m5.515s 00:05:51.496 sys 0m0.431s 00:05:51.496 03:57:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.496 03:57:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.496 ************************************ 00:05:51.496 END TEST locking_overlapped_coremask 00:05:51.496 ************************************ 00:05:51.496 03:57:05 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.496 03:57:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.496 03:57:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.496 03:57:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.755 ************************************ 00:05:51.755 START TEST locking_overlapped_coremask_via_rpc 00:05:51.755 ************************************ 00:05:51.755 03:57:06 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:51.755 03:57:06 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3649546 00:05:51.755 03:57:06 -- event/cpu_locks.sh@149 -- # waitforlisten 3649546 /var/tmp/spdk.sock 00:05:51.755 03:57:06 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.755 03:57:06 -- common/autotest_common.sh@817 -- # '[' -z 3649546 ']' 00:05:51.755 03:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.755 03:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.755 03:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.755 03:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.755 03:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:51.755 [2024-04-19 03:57:06.115525] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:51.755 [2024-04-19 03:57:06.115581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649546 ] 00:05:51.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.755 [2024-04-19 03:57:06.195140] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.755 [2024-04-19 03:57:06.195171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.014 [2024-04-19 03:57:06.283828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.014 [2024-04-19 03:57:06.283849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.014 [2024-04-19 03:57:06.283852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.581 03:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.581 03:57:07 -- common/autotest_common.sh@850 -- # return 0 00:05:52.581 03:57:07 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.581 03:57:07 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3649861 00:05:52.581 03:57:07 -- event/cpu_locks.sh@153 -- # waitforlisten 3649861 /var/tmp/spdk2.sock 00:05:52.581 03:57:07 -- common/autotest_common.sh@817 -- # '[' -z 3649861 ']' 00:05:52.581 03:57:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.581 03:57:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.581 03:57:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
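The via_rpc variant starts both targets with --disable-cpumask-locks and only claims locks afterwards through the framework_enable_cpumask_locks RPC. A sketch of the setup, assuming rpc_cmd wraps the stock scripts/rpc.py as usual in this harness:

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # first target claims cores 0-2 at runtime over the default socket
    scripts/rpc.py framework_enable_cpumask_locks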
00:05:52.581 03:57:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.581 03:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.581 [2024-04-19 03:57:07.104666] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:52.581 [2024-04-19 03:57:07.104729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649861 ] 00:05:52.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.839 [2024-04-19 03:57:07.184024] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.839 [2024-04-19 03:57:07.184052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.839 [2024-04-19 03:57:07.320133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.839 [2024-04-19 03:57:07.323383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.840 [2024-04-19 03:57:07.323384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:53.777 03:57:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.777 03:57:08 -- common/autotest_common.sh@850 -- # return 0 00:05:53.777 03:57:08 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.777 03:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.777 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 03:57:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.777 03:57:08 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.777 03:57:08 -- common/autotest_common.sh@638 -- # local es=0 00:05:53.777 03:57:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.777 03:57:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:53.777 03:57:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.777 03:57:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:53.777 03:57:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.777 03:57:08 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.777 03:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.777 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:53.777 [2024-04-19 03:57:08.060405] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3649546 has claimed it. 
00:05:53.777 request: 00:05:53.777 { 00:05:53.777 "method": "framework_enable_cpumask_locks", 00:05:53.777 "req_id": 1 00:05:53.777 } 00:05:53.777 Got JSON-RPC error response 00:05:53.777 response: 00:05:53.777 { 00:05:53.777 "code": -32603, 00:05:53.777 "message": "Failed to claim CPU core: 2" 00:05:53.777 } 00:05:53.777 03:57:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:53.777 03:57:08 -- common/autotest_common.sh@641 -- # es=1 00:05:53.777 03:57:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:53.777 03:57:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:53.777 03:57:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:53.777 03:57:08 -- event/cpu_locks.sh@158 -- # waitforlisten 3649546 /var/tmp/spdk.sock 00:05:53.777 03:57:08 -- common/autotest_common.sh@817 -- # '[' -z 3649546 ']' 00:05:53.777 03:57:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.777 03:57:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.777 03:57:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.777 03:57:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.777 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.036 03:57:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.036 03:57:08 -- common/autotest_common.sh@850 -- # return 0 00:05:54.036 03:57:08 -- event/cpu_locks.sh@159 -- # waitforlisten 3649861 /var/tmp/spdk2.sock 00:05:54.036 03:57:08 -- common/autotest_common.sh@817 -- # '[' -z 3649861 ']' 00:05:54.036 03:57:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.036 03:57:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.036 03:57:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
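The request/response pair above is the raw JSON-RPC exchange behind the failing call. Issued by hand it would look roughly like this (socket path from the trace; the rpc.py entry point is an assumption based on the SPDK tree layout):

    # core 2 already belongs to pid 3649546, so this returns the
    # -32603 "Failed to claim CPU core: 2" error shown above
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks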
00:05:54.036 03:57:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.036 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.296 03:57:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.296 03:57:08 -- common/autotest_common.sh@850 -- # return 0 00:05:54.296 03:57:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:54.296 03:57:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.296 03:57:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.296 03:57:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.296 00:05:54.296 real 0m2.524s 00:05:54.296 user 0m1.235s 00:05:54.296 sys 0m0.209s 00:05:54.296 03:57:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.296 03:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.296 ************************************ 00:05:54.296 END TEST locking_overlapped_coremask_via_rpc 00:05:54.296 ************************************ 00:05:54.296 03:57:08 -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.296 03:57:08 -- event/cpu_locks.sh@15 -- # [[ -z 3649546 ]] 00:05:54.296 03:57:08 -- event/cpu_locks.sh@15 -- # killprocess 3649546 00:05:54.296 03:57:08 -- common/autotest_common.sh@936 -- # '[' -z 3649546 ']' 00:05:54.296 03:57:08 -- common/autotest_common.sh@940 -- # kill -0 3649546 00:05:54.296 03:57:08 -- common/autotest_common.sh@941 -- # uname 00:05:54.296 03:57:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.296 03:57:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3649546 00:05:54.296 03:57:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.296 03:57:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.296 03:57:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3649546' 00:05:54.296 killing process with pid 3649546 00:05:54.296 03:57:08 -- common/autotest_common.sh@955 -- # kill 3649546 00:05:54.296 03:57:08 -- common/autotest_common.sh@960 -- # wait 3649546 00:05:54.556 03:57:09 -- event/cpu_locks.sh@16 -- # [[ -z 3649861 ]] 00:05:54.556 03:57:09 -- event/cpu_locks.sh@16 -- # killprocess 3649861 00:05:54.556 03:57:09 -- common/autotest_common.sh@936 -- # '[' -z 3649861 ']' 00:05:54.556 03:57:09 -- common/autotest_common.sh@940 -- # kill -0 3649861 00:05:54.556 03:57:09 -- common/autotest_common.sh@941 -- # uname 00:05:54.556 03:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.556 03:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3649861 00:05:54.815 03:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:54.815 03:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:54.815 03:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3649861' 00:05:54.815 killing process with pid 3649861 00:05:54.815 03:57:09 -- common/autotest_common.sh@955 -- # kill 3649861 00:05:54.815 03:57:09 -- common/autotest_common.sh@960 -- # wait 3649861 00:05:55.075 03:57:09 -- event/cpu_locks.sh@18 -- # rm -f 00:05:55.075 03:57:09 -- event/cpu_locks.sh@1 -- # cleanup 00:05:55.075 03:57:09 -- event/cpu_locks.sh@15 -- # [[ -z 3649546 ]] 00:05:55.075 03:57:09 -- event/cpu_locks.sh@15 -- # killprocess 3649546 
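check_remaining_locks, as traced at event/cpu_locks.sh@36-38, asserts that exactly the lock files for cores 0-2 (mask 0x7) are left behind:

    # glob the live lock files and compare against the expected set
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]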
00:05:55.075 03:57:09 -- common/autotest_common.sh@936 -- # '[' -z 3649546 ']' 00:05:55.075 03:57:09 -- common/autotest_common.sh@940 -- # kill -0 3649546 00:05:55.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3649546) - No such process 00:05:55.075 03:57:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3649546 is not found' 00:05:55.075 Process with pid 3649546 is not found 00:05:55.075 03:57:09 -- event/cpu_locks.sh@16 -- # [[ -z 3649861 ]] 00:05:55.075 03:57:09 -- event/cpu_locks.sh@16 -- # killprocess 3649861 00:05:55.075 03:57:09 -- common/autotest_common.sh@936 -- # '[' -z 3649861 ']' 00:05:55.075 03:57:09 -- common/autotest_common.sh@940 -- # kill -0 3649861 00:05:55.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3649861) - No such process 00:05:55.075 03:57:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3649861 is not found' 00:05:55.075 Process with pid 3649861 is not found 00:05:55.075 03:57:09 -- event/cpu_locks.sh@18 -- # rm -f 00:05:55.075 00:05:55.075 real 0m18.976s 00:05:55.075 user 0m33.177s 00:05:55.075 sys 0m5.755s 00:05:55.075 03:57:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.075 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 ************************************ 00:05:55.075 END TEST cpu_locks 00:05:55.075 ************************************ 00:05:55.075 00:05:55.075 real 0m46.917s 00:05:55.075 user 1m30.178s 00:05:55.075 sys 0m9.980s 00:05:55.075 03:57:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.075 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.075 ************************************ 00:05:55.075 END TEST event 00:05:55.075 ************************************ 00:05:55.075 03:57:09 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:55.075 03:57:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.075 03:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.075 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.334 ************************************ 00:05:55.334 START TEST thread 00:05:55.334 ************************************ 00:05:55.334 03:57:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:55.334 * Looking for test storage... 00:05:55.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:55.334 03:57:09 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.334 03:57:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:55.334 03:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.334 03:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:55.334 ************************************ 00:05:55.334 START TEST thread_poller_perf 00:05:55.334 ************************************ 00:05:55.334 03:57:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.594 [2024-04-19 03:57:09.865332] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
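poller_perf's flags map one-to-one onto the banner it prints next: -b is the number of registered pollers, -l the poller period in microseconds, -t the runtime in seconds (flag meanings inferred from that banner):

    # 1000 pollers, 1 us period, 1 second run; binary path from this workspace
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1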
00:05:55.594 [2024-04-19 03:57:09.865406] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650719 ] 00:05:55.594 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.594 [2024-04-19 03:57:09.948350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.594 [2024-04-19 03:57:10.041533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.594 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:56.974 ====================================== 00:05:56.974 busy:2215413536 (cyc) 00:05:56.974 total_run_count: 256000 00:05:56.974 tsc_hz: 2200000000 (cyc) 00:05:56.974 ====================================== 00:05:56.974 poller_cost: 8653 (cyc), 3933 (nsec) 00:05:56.974 00:05:56.974 real 0m1.310s 00:05:56.974 user 0m1.210s 00:05:56.974 sys 0m0.094s 00:05:56.974 03:57:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.974 03:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.974 ************************************ 00:05:56.974 END TEST thread_poller_perf 00:05:56.974 ************************************ 00:05:56.975 03:57:11 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.975 03:57:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:56.975 03:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.975 03:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:56.975 ************************************ 00:05:56.975 START TEST thread_poller_perf 00:05:56.975 ************************************ 00:05:56.975 03:57:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.975 [2024-04-19 03:57:11.333707] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:56.975 [2024-04-19 03:57:11.333770] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651012 ] 00:05:56.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.975 [2024-04-19 03:57:11.407285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.975 [2024-04-19 03:57:11.495386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.975 Running 1000 pollers for 1 seconds with 0 microseconds period. 
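The first run's summary above is internally consistent: poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz:

    echo $(( 2215413536 / 256000 ))              # 8653 cycles per poll
    echo $(( 8653 * 1000000000 / 2200000000 ))   # 3933 ns at 2.2 GHz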
00:05:58.353 ====================================== 00:05:58.353 busy:2202515572 (cyc) 00:05:58.353 total_run_count: 3411000 00:05:58.353 tsc_hz: 2200000000 (cyc) 00:05:58.353 ====================================== 00:05:58.353 poller_cost: 645 (cyc), 293 (nsec) 00:05:58.353 00:05:58.353 real 0m1.279s 00:05:58.353 user 0m1.192s 00:05:58.353 sys 0m0.082s 00:05:58.353 03:57:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.353 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.353 ************************************ 00:05:58.353 END TEST thread_poller_perf 00:05:58.353 ************************************ 00:05:58.353 03:57:12 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:58.353 00:05:58.353 real 0m3.005s 00:05:58.353 user 0m2.574s 00:05:58.353 sys 0m0.406s 00:05:58.353 03:57:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.353 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.353 ************************************ 00:05:58.353 END TEST thread 00:05:58.353 ************************************ 00:05:58.353 03:57:12 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:58.353 03:57:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.353 03:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.353 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.353 ************************************ 00:05:58.353 START TEST accel 00:05:58.353 ************************************ 00:05:58.353 03:57:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:58.612 * Looking for test storage... 00:05:58.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:58.612 03:57:12 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:58.612 03:57:12 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:58.612 03:57:12 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.612 03:57:12 -- accel/accel.sh@62 -- # spdk_tgt_pid=3651339 00:05:58.612 03:57:12 -- accel/accel.sh@63 -- # waitforlisten 3651339 00:05:58.612 03:57:12 -- common/autotest_common.sh@817 -- # '[' -z 3651339 ']' 00:05:58.612 03:57:12 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:58.612 03:57:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.612 03:57:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.612 03:57:12 -- accel/accel.sh@61 -- # build_accel_config 00:05:58.612 03:57:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.612 03:57:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.612 03:57:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.612 03:57:12 -- common/autotest_common.sh@10 -- # set +x 00:05:58.612 03:57:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.612 03:57:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.612 03:57:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.612 03:57:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.612 03:57:12 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.612 03:57:12 -- accel/accel.sh@41 -- # jq -r . 
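Same arithmetic for the zero-period run above: 2202515572 / 3411000 = 645 cycles, about 293 ns per poll, so the timed 1 us pollers of the first run cost roughly 13x more per invocation than these busy pollers:

    echo $(( 2202515572 / 3411000 ))            # 645 cycles per poll
    echo $(( 645 * 1000000000 / 2200000000 ))   # 293 ns at 2.2 GHz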
00:05:58.612 [2024-04-19 03:57:12.944062] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:58.612 [2024-04-19 03:57:12.944109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651339 ] 00:05:58.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.612 [2024-04-19 03:57:13.015283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.612 [2024-04-19 03:57:13.101306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.870 03:57:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.870 03:57:13 -- common/autotest_common.sh@850 -- # return 0 00:05:58.870 03:57:13 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:58.870 03:57:13 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:58.870 03:57:13 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:58.870 03:57:13 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:58.870 03:57:13 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:58.870 03:57:13 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:58.870 03:57:13 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:58.870 03:57:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.871 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.871 03:57:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # IFS== 00:05:58.871 03:57:13 -- accel/accel.sh@72 -- # read -r opc module 00:05:58.871 03:57:13 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.871 03:57:13 -- accel/accel.sh@75 -- # killprocess 3651339 00:05:58.871 03:57:13 -- common/autotest_common.sh@936 -- # '[' -z 3651339 ']' 00:05:58.871 03:57:13 -- common/autotest_common.sh@940 -- # kill -0 3651339 00:05:58.871 03:57:13 -- common/autotest_common.sh@941 -- # uname 00:05:58.871 03:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.871 03:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3651339 00:05:59.129 03:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.129 03:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.129 03:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3651339' 00:05:59.129 killing process with pid 3651339 00:05:59.129 03:57:13 -- common/autotest_common.sh@955 -- # kill 3651339 00:05:59.129 03:57:13 -- common/autotest_common.sh@960 -- # wait 3651339 00:05:59.387 03:57:13 -- accel/accel.sh@76 -- # trap - ERR 00:05:59.387 03:57:13 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:59.387 03:57:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:59.387 03:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.387 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.646 03:57:13 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:59.646 03:57:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:59.646 03:57:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:59.646 03:57:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.646 03:57:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.646 03:57:13 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.646 03:57:13 -- accel/accel.sh@41 -- # jq -r . 00:05:59.646 03:57:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.646 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.646 03:57:13 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:59.646 03:57:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:59.646 03:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.646 03:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:59.646 ************************************ 00:05:59.646 START TEST accel_missing_filename 00:05:59.646 ************************************ 00:05:59.646 03:57:14 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:59.646 03:57:14 -- common/autotest_common.sh@638 -- # local es=0 00:05:59.646 03:57:14 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:59.646 03:57:14 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:59.646 03:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.646 03:57:14 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:59.646 03:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.646 03:57:14 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:59.646 03:57:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:59.646 03:57:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.646 03:57:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.646 03:57:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.646 03:57:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.646 03:57:14 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.646 03:57:14 -- accel/accel.sh@41 -- # jq -r . 00:05:59.646 [2024-04-19 03:57:14.129099] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:05:59.646 [2024-04-19 03:57:14.129164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651647 ] 00:05:59.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.905 [2024-04-19 03:57:14.211674] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.905 [2024-04-19 03:57:14.298053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.905 [2024-04-19 03:57:14.343019] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.905 [2024-04-19 03:57:14.406009] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:00.164 A filename is required. 
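accel_missing_filename asserts the usage error just logged: for compress workloads -l names the uncompressed input file (see the option dump later in this log), so omitting it must abort app startup:

    # expected to fail with "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress
    # the well-formed variant, with the corpus the next test uses
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib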
00:06:00.164 03:57:14 -- common/autotest_common.sh@641 -- # es=234 00:06:00.164 03:57:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.164 03:57:14 -- common/autotest_common.sh@650 -- # es=106 00:06:00.164 03:57:14 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:00.164 03:57:14 -- common/autotest_common.sh@658 -- # es=1 00:06:00.164 03:57:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.164 00:06:00.164 real 0m0.408s 00:06:00.164 user 0m0.311s 00:06:00.164 sys 0m0.141s 00:06:00.164 03:57:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.164 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.164 ************************************ 00:06:00.164 END TEST accel_missing_filename 00:06:00.164 ************************************ 00:06:00.164 03:57:14 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.164 03:57:14 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:00.164 03:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.164 03:57:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.164 ************************************ 00:06:00.164 START TEST accel_compress_verify 00:06:00.164 ************************************ 00:06:00.164 03:57:14 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.164 03:57:14 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.164 03:57:14 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.164 03:57:14 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:00.164 03:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.164 03:57:14 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:00.164 03:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.164 03:57:14 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.164 03:57:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:00.164 03:57:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.164 03:57:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.164 03:57:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.164 03:57:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.164 03:57:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.164 03:57:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.164 03:57:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.164 03:57:14 -- accel/accel.sh@41 -- # jq -r . 00:06:00.164 [2024-04-19 03:57:14.688744] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
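accel_compress_verify is the mirror image: the input file is supplied but -y (verify results) is combined with compress, which accel_perf rejects a few entries below with "Compression does not support the verify option, aborting." - again wrapped in NOT so the failure counts as a pass:

    # expected to fail: compressed output cannot be verified in place
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y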
00:06:00.164 [2024-04-19 03:57:14.688796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3651779 ] 00:06:00.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.422 [2024-04-19 03:57:14.769894] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.422 [2024-04-19 03:57:14.856459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.422 [2024-04-19 03:57:14.901667] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.681 [2024-04-19 03:57:14.964783] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:00.681 00:06:00.681 Compression does not support the verify option, aborting. 00:06:00.681 03:57:15 -- common/autotest_common.sh@641 -- # es=161 00:06:00.681 03:57:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.681 03:57:15 -- common/autotest_common.sh@650 -- # es=33 00:06:00.681 03:57:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:00.681 03:57:15 -- common/autotest_common.sh@658 -- # es=1 00:06:00.681 03:57:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.681 00:06:00.681 real 0m0.405s 00:06:00.681 user 0m0.304s 00:06:00.681 sys 0m0.141s 00:06:00.681 03:57:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.681 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.681 ************************************ 00:06:00.681 END TEST accel_compress_verify 00:06:00.681 ************************************ 00:06:00.682 03:57:15 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:00.682 03:57:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:00.682 03:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.682 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 ************************************ 00:06:00.940 START TEST accel_wrong_workload 00:06:00.940 ************************************ 00:06:00.940 03:57:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:00.940 03:57:15 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.940 03:57:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:00.940 03:57:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.940 03:57:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:00.940 03:57:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:00.940 03:57:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.940 03:57:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.940 03:57:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.940 03:57:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.940 03:57:15 -- accel/accel.sh@41 -- # jq -r . 
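The es= arithmetic threaded through the two tests above is the harness inverting an exit status: a raw status above 128 (a signal death, 234 and 161 here) is reduced by 128, any remaining non-zero value collapses to 1, and the closing (( !es == 0 )) succeeds only when the wrapped command failed as expected. A condensed sketch of that idiom, written as a simplified stand-in for the NOT helper in autotest_common.sh rather than its verbatim body:

    # sketch of the exit-status inversion seen in the trace above; the real
    # helper in autotest_common.sh carries extra bookkeeping not shown here
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # 234 -> 106, 161 -> 33
        [[ $es -ne 0 ]] && es=1              # collapse any failure to 1
        (( !es == 0 ))                       # succeed only if "$@" failed
    }

Under that shape, NOT accel_perf -t 1 -w foobar returns success precisely because accel_perf rejects the bogus workload, which is what the trace that follows shows.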
00:06:00.940 Unsupported workload type: foobar 00:06:00.940 [2024-04-19 03:57:15.251997] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:00.940 accel_perf options: 00:06:00.940 [-h help message] 00:06:00.940 [-q queue depth per core] 00:06:00.940 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.940 [-T number of threads per core 00:06:00.940 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.940 [-t time in seconds] 00:06:00.940 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.940 [ dif_verify, , dif_generate, dif_generate_copy 00:06:00.940 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.940 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.940 [-S for crc32c workload, use this seed value (default 0) 00:06:00.940 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.940 [-f for fill workload, use this BYTE value (default 255) 00:06:00.940 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.940 [-y verify result if this switch is on] 00:06:00.940 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.940 Can be used to spread operations across a wider range of memory. 00:06:00.940 03:57:15 -- common/autotest_common.sh@641 -- # es=1 00:06:00.940 03:57:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.940 03:57:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:00.940 03:57:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.940 00:06:00.940 real 0m0.034s 00:06:00.940 user 0m0.022s 00:06:00.940 sys 0m0.012s 00:06:00.940 03:57:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.940 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 ************************************ 00:06:00.940 END TEST accel_wrong_workload 00:06:00.940 ************************************ 00:06:00.940 Error: writing output failed: Broken pipe 00:06:00.940 03:57:15 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.940 03:57:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:00.940 03:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.940 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 ************************************ 00:06:00.940 START TEST accel_negative_buffers 00:06:00.940 ************************************ 00:06:00.940 03:57:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.940 03:57:15 -- common/autotest_common.sh@638 -- # local es=0 00:06:00.940 03:57:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:00.940 03:57:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:00.940 03:57:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:00.940 03:57:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:00.940 03:57:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:00.940 03:57:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.940 03:57:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.940 03:57:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.940 03:57:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.940 03:57:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.940 03:57:15 -- accel/accel.sh@41 -- # jq -r . 00:06:00.940 -x option must be non-negative. 00:06:00.940 [2024-04-19 03:57:15.442511] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:00.940 accel_perf options: 00:06:00.940 [-h help message] 00:06:00.940 [-q queue depth per core] 00:06:00.940 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.940 [-T number of threads per core 00:06:00.940 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.940 [-t time in seconds] 00:06:00.940 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.941 [ dif_verify, , dif_generate, dif_generate_copy 00:06:00.941 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.941 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.941 [-S for crc32c workload, use this seed value (default 0) 00:06:00.941 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.941 [-f for fill workload, use this BYTE value (default 255) 00:06:00.941 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.941 [-y verify result if this switch is on] 00:06:00.941 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.941 Can be used to spread operations across a wider range of memory. 
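Both negative tests above trip argument validation on purpose, and the option summary they print is accel_perf's normal usage dump. For contrast, a sketch of an invocation that satisfies the constraint under test, since the help text defines -x as the number of xor source buffers with a minimum of 2:

    # valid counterpart to the '-x -1' negative test: xor across two
    # source buffers for one second, with result verification (-y)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w xor -x 2 -y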
00:06:00.941 03:57:15 -- common/autotest_common.sh@641 -- # es=1 00:06:00.941 03:57:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:00.941 03:57:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:00.941 03:57:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:00.941 00:06:00.941 real 0m0.032s 00:06:00.941 user 0m0.018s 00:06:00.941 sys 0m0.013s 00:06:00.941 03:57:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.941 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:00.941 ************************************ 00:06:00.941 END TEST accel_negative_buffers 00:06:00.941 ************************************ 00:06:00.941 Error: writing output failed: Broken pipe 00:06:01.199 03:57:15 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:01.199 03:57:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:01.199 03:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.200 03:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 ************************************ 00:06:01.200 START TEST accel_crc32c 00:06:01.200 ************************************ 00:06:01.200 03:57:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:01.200 03:57:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.200 03:57:15 -- accel/accel.sh@17 -- # local accel_module 00:06:01.200 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.200 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.200 03:57:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:01.200 03:57:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:01.200 03:57:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.200 03:57:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.200 03:57:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.200 03:57:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.200 03:57:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.200 03:57:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.200 03:57:15 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.200 03:57:15 -- accel/accel.sh@41 -- # jq -r . 00:06:01.200 [2024-04-19 03:57:15.645967] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:01.200 [2024-04-19 03:57:15.646021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652020 ] 00:06:01.200 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.459 [2024-04-19 03:57:15.728116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.459 [2024-04-19 03:57:15.821633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val=0x1 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val=crc32c 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val=32 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val=software 00:06:01.459 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.459 03:57:15 -- accel/accel.sh@22 -- # accel_module=software 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.459 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.459 03:57:15 -- accel/accel.sh@20 -- # val=32 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- accel/accel.sh@20 -- # val=32 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- 
accel/accel.sh@20 -- # val=1 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- accel/accel.sh@20 -- # val=Yes 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:01.460 03:57:15 -- accel/accel.sh@20 -- # val= 00:06:01.460 03:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # IFS=: 00:06:01.460 03:57:15 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:02.838 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.838 03:57:17 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:02.838 03:57:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.838 00:06:02.838 real 0m1.421s 00:06:02.838 user 0m1.291s 00:06:02.838 sys 0m0.142s 00:06:02.838 03:57:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.838 03:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.838 ************************************ 00:06:02.838 END TEST accel_crc32c 00:06:02.838 ************************************ 00:06:02.838 03:57:17 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:02.838 03:57:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:02.838 03:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.838 03:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.838 ************************************ 00:06:02.838 START TEST 
accel_crc32c_C2 00:06:02.838 ************************************ 00:06:02.838 03:57:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:02.838 03:57:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.838 03:57:17 -- accel/accel.sh@17 -- # local accel_module 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:02.838 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:02.838 03:57:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:02.838 03:57:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:02.838 03:57:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.838 03:57:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.838 03:57:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.838 03:57:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.838 03:57:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.838 03:57:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.838 03:57:17 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.838 03:57:17 -- accel/accel.sh@41 -- # jq -r . 00:06:02.838 [2024-04-19 03:57:17.223841] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:02.838 [2024-04-19 03:57:17.223891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652305 ] 00:06:02.838 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.838 [2024-04-19 03:57:17.304176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.097 [2024-04-19 03:57:17.393074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.097 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.097 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.097 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.097 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.097 03:57:17 -- accel/accel.sh@20 -- # val=0x1 00:06:03.097 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.097 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.097 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.097 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.097 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.097 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=crc32c 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=0 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=software 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@22 -- # accel_module=software 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=32 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=32 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=1 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val=Yes 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:03.098 03:57:17 -- accel/accel.sh@20 -- # val= 00:06:03.098 03:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # IFS=: 00:06:03.098 03:57:17 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- 
accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@20 -- # val= 00:06:04.473 03:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.473 03:57:18 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:04.473 03:57:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.473 00:06:04.473 real 0m1.413s 00:06:04.473 user 0m1.292s 00:06:04.473 sys 0m0.134s 00:06:04.473 03:57:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.473 03:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.473 ************************************ 00:06:04.473 END TEST accel_crc32c_C2 00:06:04.473 ************************************ 00:06:04.473 03:57:18 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:04.473 03:57:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.473 03:57:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.473 03:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:04.473 ************************************ 00:06:04.473 START TEST accel_copy 00:06:04.473 ************************************ 00:06:04.473 03:57:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:04.473 03:57:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.473 03:57:18 -- accel/accel.sh@17 -- # local accel_module 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # IFS=: 00:06:04.473 03:57:18 -- accel/accel.sh@19 -- # read -r var val 00:06:04.473 03:57:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:04.473 03:57:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:04.473 03:57:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.473 03:57:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.473 03:57:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.473 03:57:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.473 03:57:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.473 03:57:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.473 03:57:18 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.473 03:57:18 -- accel/accel.sh@41 -- # jq -r . 00:06:04.473 [2024-04-19 03:57:18.792362] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:04.473 [2024-04-19 03:57:18.792413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652598 ] 00:06:04.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.473 [2024-04-19 03:57:18.874333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.473 [2024-04-19 03:57:18.962392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=0x1 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=copy 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=software 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=32 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=32 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=1 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val=Yes 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:04.731 03:57:19 -- accel/accel.sh@20 -- # val= 00:06:04.731 03:57:19 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # IFS=: 00:06:04.731 03:57:19 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:05.677 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.677 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.677 03:57:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.677 03:57:20 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:05.677 03:57:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.677 00:06:05.677 real 0m1.414s 00:06:05.677 user 0m1.287s 00:06:05.677 sys 0m0.139s 00:06:05.677 03:57:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.677 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 ************************************ 00:06:05.677 END TEST accel_copy 00:06:05.677 ************************************ 00:06:05.936 03:57:20 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.936 03:57:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:05.936 03:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.936 03:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:05.936 ************************************ 00:06:05.936 START TEST accel_fill 00:06:05.936 ************************************ 00:06:05.936 03:57:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.936 03:57:20 -- accel/accel.sh@16 -- # local accel_opc 
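The accel_fill test starting here passes -f 128 -q 64 -a 64; per the option summary printed earlier, those are the fill byte value, the queue depth per core, and the number of tasks to allocate per core. A sketch of the equivalent standalone invocation, matching the val=0x80 and val=64 entries the trace records below:

    # fill workload: write byte 128 (0x80) at queue depth 64 with 64
    # tasks per core, verifying the output (-y)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y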
00:06:05.936 03:57:20 -- accel/accel.sh@17 -- # local accel_module 00:06:05.936 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:05.936 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:05.936 03:57:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.936 03:57:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.936 03:57:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.936 03:57:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.936 03:57:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.936 03:57:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.936 03:57:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.936 03:57:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.936 03:57:20 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.936 03:57:20 -- accel/accel.sh@41 -- # jq -r . 00:06:05.936 [2024-04-19 03:57:20.388514] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:05.936 [2024-04-19 03:57:20.388573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652934 ] 00:06:05.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.196 [2024-04-19 03:57:20.473407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.196 [2024-04-19 03:57:20.563796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=0x1 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=fill 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=0x80 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 
-- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=software 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=64 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=64 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=1 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val=Yes 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:06.196 03:57:20 -- accel/accel.sh@20 -- # val= 00:06:06.196 03:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # IFS=: 00:06:06.196 03:57:20 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@20 -- # val= 00:06:07.574 03:57:21 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.574 03:57:21 -- accel/accel.sh@19 
-- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.574 03:57:21 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:07.574 03:57:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.574 00:06:07.574 real 0m1.422s 00:06:07.574 user 0m1.292s 00:06:07.574 sys 0m0.143s 00:06:07.574 03:57:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.574 03:57:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.574 ************************************ 00:06:07.574 END TEST accel_fill 00:06:07.574 ************************************ 00:06:07.574 03:57:21 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:07.574 03:57:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:07.574 03:57:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.574 03:57:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.574 ************************************ 00:06:07.574 START TEST accel_copy_crc32c 00:06:07.574 ************************************ 00:06:07.574 03:57:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:07.574 03:57:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.574 03:57:21 -- accel/accel.sh@17 -- # local accel_module 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # IFS=: 00:06:07.574 03:57:21 -- accel/accel.sh@19 -- # read -r var val 00:06:07.574 03:57:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:07.574 03:57:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:07.574 03:57:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.574 03:57:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.574 03:57:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.574 03:57:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.574 03:57:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.574 03:57:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.574 03:57:21 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.574 03:57:21 -- accel/accel.sh@41 -- # jq -r . 00:06:07.574 [2024-04-19 03:57:21.980265] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:07.574 [2024-04-19 03:57:21.980341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653302 ] 00:06:07.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.574 [2024-04-19 03:57:22.064428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.910 [2024-04-19 03:57:22.156341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=0x1 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=0 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=software 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=32 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 
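Worth noting at this point in the trace: the copy_crc32c workload copies a buffer and computes a CRC-32C over the data in a single operation, which is why the configuration above records two 4096-byte buffers where the plain crc32c test carried one. A sketch of the standalone form of the command this test wraps:

    # combined copy + CRC-32C in one accel operation, results verified
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y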
00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=32 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=1 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val=Yes 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:07.911 03:57:22 -- accel/accel.sh@20 -- # val= 00:06:07.911 03:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # IFS=: 00:06:07.911 03:57:22 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:08.847 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:08.847 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:08.847 03:57:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.847 03:57:23 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:08.847 03:57:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.847 00:06:08.847 real 0m1.422s 00:06:08.848 user 0m1.290s 00:06:08.848 sys 0m0.145s 00:06:08.848 03:57:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.848 03:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.106 ************************************ 00:06:09.106 END TEST accel_copy_crc32c 00:06:09.106 ************************************ 00:06:09.106 03:57:23 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:09.106 
03:57:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:09.106 03:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.106 03:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.106 ************************************ 00:06:09.106 START TEST accel_copy_crc32c_C2 00:06:09.106 ************************************ 00:06:09.106 03:57:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:09.106 03:57:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.106 03:57:23 -- accel/accel.sh@17 -- # local accel_module 00:06:09.106 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.106 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.106 03:57:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:09.106 03:57:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:09.106 03:57:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.106 03:57:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.106 03:57:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.106 03:57:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.106 03:57:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.106 03:57:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.106 03:57:23 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.106 03:57:23 -- accel/accel.sh@41 -- # jq -r . 00:06:09.106 [2024-04-19 03:57:23.556894] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:09.107 [2024-04-19 03:57:23.556953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653689 ] 00:06:09.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.366 [2024-04-19 03:57:23.635473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.366 [2024-04-19 03:57:23.721583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=0x1 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 
03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=0 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=software 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=32 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=32 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=1 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val=Yes 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:09.366 03:57:23 -- accel/accel.sh@20 -- # val= 00:06:09.366 03:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # IFS=: 00:06:09.366 03:57:23 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@20 -- # val= 00:06:10.746 03:57:24 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:24 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.746 03:57:24 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:10.746 03:57:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.746 00:06:10.746 real 0m1.408s 00:06:10.746 user 0m1.287s 00:06:10.746 sys 0m0.134s 00:06:10.746 03:57:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.746 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.746 ************************************ 00:06:10.746 END TEST accel_copy_crc32c_C2 00:06:10.746 ************************************ 00:06:10.746 03:57:24 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:10.746 03:57:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:10.746 03:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.746 03:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:10.746 ************************************ 00:06:10.746 START TEST accel_dualcast 00:06:10.746 ************************************ 00:06:10.746 03:57:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:10.746 03:57:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.746 03:57:25 -- accel/accel.sh@17 -- # local accel_module 00:06:10.746 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:10.746 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:10.746 03:57:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:10.746 03:57:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:10.746 03:57:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.746 03:57:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.746 03:57:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.746 03:57:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.746 03:57:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.746 03:57:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.746 03:57:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.746 03:57:25 -- accel/accel.sh@41 -- # jq -r . 00:06:10.746 [2024-04-19 03:57:25.130140] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:10.746 [2024-04-19 03:57:25.130206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654000 ] 00:06:10.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.746 [2024-04-19 03:57:25.210275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.006 [2024-04-19 03:57:25.297396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=0x1 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=dualcast 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=software 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=32 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=32 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=1 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val=Yes 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:11.006 03:57:25 -- accel/accel.sh@20 -- # val= 00:06:11.006 03:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # IFS=: 00:06:11.006 03:57:25 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.383 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.383 03:57:26 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:12.383 03:57:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.383 00:06:12.383 real 0m1.412s 00:06:12.383 user 0m1.290s 00:06:12.383 sys 0m0.134s 00:06:12.383 03:57:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.383 03:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.383 ************************************ 00:06:12.383 END TEST accel_dualcast 00:06:12.383 ************************************ 00:06:12.383 03:57:26 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:12.383 03:57:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.383 03:57:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.383 03:57:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.383 ************************************ 00:06:12.383 START TEST accel_compare 00:06:12.383 ************************************ 00:06:12.383 03:57:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:12.383 03:57:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.383 03:57:26 
-- accel/accel.sh@17 -- # local accel_module 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.383 03:57:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:12.383 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.383 03:57:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:12.383 03:57:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.383 03:57:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.383 03:57:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.383 03:57:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.383 03:57:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.383 03:57:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.383 03:57:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.383 03:57:26 -- accel/accel.sh@41 -- # jq -r . 00:06:12.383 [2024-04-19 03:57:26.698483] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:12.383 [2024-04-19 03:57:26.698537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654295 ] 00:06:12.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.383 [2024-04-19 03:57:26.779173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.383 [2024-04-19 03:57:26.868379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.642 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.642 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val=0x1 00:06:12.642 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.642 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.642 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.642 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.642 03:57:26 -- accel/accel.sh@20 -- # val=compare 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- 
accel/accel.sh@20 -- # val=software 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val=32 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val=32 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val=1 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val=Yes 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:12.643 03:57:26 -- accel/accel.sh@20 -- # val= 00:06:12.643 03:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # IFS=: 00:06:12.643 03:57:26 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:13.581 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 03:57:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.581 03:57:28 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:13.581 03:57:28 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:13.581 00:06:13.581 real 0m1.411s 00:06:13.581 user 0m1.287s 00:06:13.581 sys 0m0.136s 00:06:13.581 03:57:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.581 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.581 ************************************ 00:06:13.581 END TEST accel_compare 00:06:13.581 ************************************ 00:06:13.840 03:57:28 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:13.840 03:57:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.840 03:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.840 03:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.840 ************************************ 00:06:13.840 START TEST accel_xor 00:06:13.840 ************************************ 00:06:13.840 03:57:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:13.840 03:57:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.840 03:57:28 -- accel/accel.sh@17 -- # local accel_module 00:06:13.840 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:13.840 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:13.840 03:57:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:13.840 03:57:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:13.840 03:57:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.840 03:57:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.840 03:57:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.841 03:57:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.841 03:57:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.841 03:57:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.841 03:57:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.841 03:57:28 -- accel/accel.sh@41 -- # jq -r . 00:06:13.841 [2024-04-19 03:57:28.290111] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
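The xor case just started uses accel_perf's default source count; the val=2 parsed below appears to correspond to two source buffers XORed into one destination, and the harness repeats the test further down with -x 3 to cover three sources. A hedged standalone sketch of both runs, same assumptions as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y        # default: 2 sources
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3   # 3 source buffers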
00:06:13.841 [2024-04-19 03:57:28.290168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654580 ] 00:06:13.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.100 [2024-04-19 03:57:28.371626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.100 [2024-04-19 03:57:28.461249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=0x1 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=xor 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=2 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=software 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=32 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=32 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- 
accel/accel.sh@20 -- # val=1 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val=Yes 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:14.100 03:57:28 -- accel/accel.sh@20 -- # val= 00:06:14.100 03:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # IFS=: 00:06:14.100 03:57:28 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@20 -- # val= 00:06:15.479 03:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.479 03:57:29 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:15.479 03:57:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.479 00:06:15.479 real 0m1.417s 00:06:15.479 user 0m1.293s 00:06:15.479 sys 0m0.134s 00:06:15.479 03:57:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.479 03:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.479 ************************************ 00:06:15.479 END TEST accel_xor 00:06:15.479 ************************************ 00:06:15.479 03:57:29 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:15.479 03:57:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:15.479 03:57:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.479 03:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.479 ************************************ 00:06:15.479 START TEST accel_xor 
00:06:15.479 ************************************ 00:06:15.479 03:57:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:15.479 03:57:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.479 03:57:29 -- accel/accel.sh@17 -- # local accel_module 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # IFS=: 00:06:15.479 03:57:29 -- accel/accel.sh@19 -- # read -r var val 00:06:15.479 03:57:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:15.479 03:57:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:15.479 03:57:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.479 03:57:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.479 03:57:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.479 03:57:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.479 03:57:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.479 03:57:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.479 03:57:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:15.479 03:57:29 -- accel/accel.sh@41 -- # jq -r . 00:06:15.479 [2024-04-19 03:57:29.873448] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:15.479 [2024-04-19 03:57:29.873524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654872 ] 00:06:15.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.479 [2024-04-19 03:57:29.956882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.739 [2024-04-19 03:57:30.058177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=0x1 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=xor 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=3 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=software 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=32 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=32 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=1 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val=Yes 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:15.739 03:57:30 -- accel/accel.sh@20 -- # val= 00:06:15.739 03:57:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # IFS=: 00:06:15.739 03:57:30 -- accel/accel.sh@19 -- # read -r var val 00:06:17.116 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.116 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.116 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.116 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.116 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.116 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.116 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.117 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.117 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.117 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # 
read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.117 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.117 03:57:31 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:17.117 03:57:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.117 00:06:17.117 real 0m1.432s 00:06:17.117 user 0m1.299s 00:06:17.117 sys 0m0.146s 00:06:17.117 03:57:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.117 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.117 ************************************ 00:06:17.117 END TEST accel_xor 00:06:17.117 ************************************ 00:06:17.117 03:57:31 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:17.117 03:57:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.117 03:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.117 03:57:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.117 ************************************ 00:06:17.117 START TEST accel_dif_verify 00:06:17.117 ************************************ 00:06:17.117 03:57:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:17.117 03:57:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.117 03:57:31 -- accel/accel.sh@17 -- # local accel_module 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.117 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.117 03:57:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:17.117 03:57:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:17.117 03:57:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.117 03:57:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.117 03:57:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.117 03:57:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.117 03:57:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.117 03:57:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.117 03:57:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.117 03:57:31 -- accel/accel.sh@41 -- # jq -r . 00:06:17.117 [2024-04-19 03:57:31.464895] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
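dif_verify exercises end-to-end data protection. Judging by the values parsed below ('4096 bytes' source and destination, '512 bytes' block size, '8 bytes' metadata), each 4096-byte buffer carries an 8-byte DIF protection-information field per 512-byte block, and the operation checks those fields (guard, application and reference tags) rather than comparing payloads, which is presumably why the verify flag reads No here. Reproducing it standalone needs no extra size flags, since those values are the example's defaults in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify   # sizes per config dump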
00:06:17.117 [2024-04-19 03:57:31.464961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655158 ] 00:06:17.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.117 [2024-04-19 03:57:31.548162] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.117 [2024-04-19 03:57:31.638197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val=0x1 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.376 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.376 03:57:31 -- accel/accel.sh@20 -- # val=dif_verify 00:06:17.376 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.376 03:57:31 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val=software 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r 
var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val=32 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val=32 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val=1 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val=No 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 03:57:31 -- accel/accel.sh@20 -- # val= 00:06:17.377 03:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 03:57:31 -- accel/accel.sh@19 -- # read -r var val 00:06:18.768 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.768 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.768 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.768 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.768 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.768 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:32 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:32 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.769 03:57:32 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:18.769 03:57:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.769 00:06:18.769 real 0m1.420s 00:06:18.769 user 0m1.293s 00:06:18.769 sys 0m0.141s 00:06:18.769 03:57:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.769 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.769 
************************************ 00:06:18.769 END TEST accel_dif_verify 00:06:18.769 ************************************ 00:06:18.769 03:57:32 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:18.769 03:57:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:18.769 03:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.769 03:57:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.769 ************************************ 00:06:18.769 START TEST accel_dif_generate 00:06:18.769 ************************************ 00:06:18.769 03:57:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:18.769 03:57:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.769 03:57:33 -- accel/accel.sh@17 -- # local accel_module 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:18.769 03:57:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:18.769 03:57:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.769 03:57:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.769 03:57:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.769 03:57:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.769 03:57:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.769 03:57:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.769 03:57:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.769 03:57:33 -- accel/accel.sh@41 -- # jq -r . 00:06:18.769 [2024-04-19 03:57:33.061749] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
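dif_generate is the producer side of the previous test: instead of checking existing protection information, it computes and inserts the 8-byte DIF field for every 512-byte block of the 4096-byte buffer (the same sizes are parsed below). Standalone sketch, same assumptions as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate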
00:06:18.769 [2024-04-19 03:57:33.061829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655450 ] 00:06:18.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.769 [2024-04-19 03:57:33.144673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.769 [2024-04-19 03:57:33.235968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=0x1 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=dif_generate 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=software 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read 
-r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=32 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=32 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val=1 00:06:18.769 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:18.769 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:18.769 03:57:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.031 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:19.031 03:57:33 -- accel/accel.sh@20 -- # val=No 00:06:19.031 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:19.031 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:19.031 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:19.031 03:57:33 -- accel/accel.sh@20 -- # val= 00:06:19.031 03:57:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # IFS=: 00:06:19.031 03:57:33 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:19.969 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:19.969 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:19.969 03:57:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.969 03:57:34 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:19.969 03:57:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.969 00:06:19.969 real 0m1.421s 00:06:19.969 user 0m1.298s 00:06:19.969 sys 0m0.135s 00:06:19.969 03:57:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.969 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:19.969 
************************************ 00:06:19.969 END TEST accel_dif_generate 00:06:19.969 ************************************ 00:06:19.969 03:57:34 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:19.969 03:57:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:19.969 03:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.969 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.228 ************************************ 00:06:20.228 START TEST accel_dif_generate_copy 00:06:20.228 ************************************ 00:06:20.228 03:57:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:20.228 03:57:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.228 03:57:34 -- accel/accel.sh@17 -- # local accel_module 00:06:20.228 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.228 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.228 03:57:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:20.228 03:57:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:20.228 03:57:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.228 03:57:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.228 03:57:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.228 03:57:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.228 03:57:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.228 03:57:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.228 03:57:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.228 03:57:34 -- accel/accel.sh@41 -- # jq -r . 00:06:20.228 [2024-04-19 03:57:34.643069] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
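dif_generate_copy fuses the previous operation with a copy: protection information is generated while the 4096-byte source is copied into a separate 4096-byte destination (only the two buffer sizes show up in the parsed config below), which saves a pass over the data compared to running copy and dif_generate back to back. Sketch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy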
00:06:20.228 [2024-04-19 03:57:34.643126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655751 ] 00:06:20.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.228 [2024-04-19 03:57:34.723929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.487 [2024-04-19 03:57:34.811459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=0x1 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=software 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=32 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=32 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r 
var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=1 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val=No 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:20.487 03:57:34 -- accel/accel.sh@20 -- # val= 00:06:20.487 03:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # IFS=: 00:06:20.487 03:57:34 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:21.865 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.865 03:57:36 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:21.865 03:57:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.865 00:06:21.865 real 0m1.411s 00:06:21.865 user 0m1.283s 00:06:21.865 sys 0m0.140s 00:06:21.865 03:57:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.865 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 ************************************ 00:06:21.865 END TEST accel_dif_generate_copy 00:06:21.865 ************************************ 00:06:21.865 03:57:36 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:21.865 03:57:36 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.865 03:57:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:21.865 03:57:36 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.865 03:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 ************************************ 00:06:21.865 START TEST accel_comp 00:06:21.865 ************************************ 00:06:21.865 03:57:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.865 03:57:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.865 03:57:36 -- accel/accel.sh@17 -- # local accel_module 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:21.865 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:21.865 03:57:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.865 03:57:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.865 03:57:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.865 03:57:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.865 03:57:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.865 03:57:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.865 03:57:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.865 03:57:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.865 03:57:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.865 03:57:36 -- accel/accel.sh@41 -- # jq -r . 00:06:21.865 [2024-04-19 03:57:36.218437] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:06:21.865 [2024-04-19 03:57:36.218498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656113 ] 00:06:21.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.865 [2024-04-19 03:57:36.299706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.865 [2024-04-19 03:57:36.386069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val=0x1 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 
-- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val=compress 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val=software 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.124 03:57:36 -- accel/accel.sh@20 -- # val=32 00:06:22.124 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.124 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val=32 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val=1 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val=No 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:22.125 03:57:36 -- accel/accel.sh@20 -- # val= 00:06:22.125 03:57:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # IFS=: 00:06:22.125 03:57:36 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read 
-r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.503 03:57:37 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:23.503 03:57:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.503 00:06:23.503 real 0m1.419s 00:06:23.503 user 0m1.297s 00:06:23.503 sys 0m0.134s 00:06:23.503 03:57:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.503 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.503 ************************************ 00:06:23.503 END TEST accel_comp 00:06:23.503 ************************************ 00:06:23.503 03:57:37 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.503 03:57:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:23.503 03:57:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.503 03:57:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.503 ************************************ 00:06:23.503 START TEST accel_decomp 00:06:23.503 ************************************ 00:06:23.503 03:57:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.503 03:57:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.503 03:57:37 -- accel/accel.sh@17 -- # local accel_module 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:37 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.503 03:57:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:23.503 03:57:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.503 03:57:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.503 03:57:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.503 03:57:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.503 03:57:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.503 03:57:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.503 03:57:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.503 03:57:37 -- accel/accel.sh@41 -- # jq -r . 00:06:23.503 [2024-04-19 03:57:37.797874] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
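Unlike the dif tests, the compress/decompress runs above pass -l pointing at the 'bib' file in the SPDK tree, which from the trace appears to be the input data corpus, and the decompress variants add -y, which lines up with the config dump flipping from 'No' to 'Yes' (plausibly result verification). A hedged equivalent of the command just traced:

  # decompress sketch; the meanings of -l (input file) and -y (verify)
  # are inferred from the trace, not taken from accel_perf documentation
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y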
00:06:23.503 [2024-04-19 03:57:37.797932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656485 ] 00:06:23.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.503 [2024-04-19 03:57:37.876250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.503 [2024-04-19 03:57:37.963099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=0x1 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=decompress 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=software 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=32 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.503 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.503 03:57:38 
-- accel/accel.sh@19 -- # read -r var val 00:06:23.503 03:57:38 -- accel/accel.sh@20 -- # val=32 00:06:23.503 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.504 03:57:38 -- accel/accel.sh@20 -- # val=1 00:06:23.504 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.504 03:57:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.504 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.504 03:57:38 -- accel/accel.sh@20 -- # val=Yes 00:06:23.504 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.504 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.504 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:23.504 03:57:38 -- accel/accel.sh@20 -- # val= 00:06:23.504 03:57:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # IFS=: 00:06:23.504 03:57:38 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:24.882 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.882 03:57:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.882 03:57:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.882 00:06:24.882 real 0m1.412s 00:06:24.882 user 0m1.287s 00:06:24.882 sys 0m0.139s 00:06:24.882 03:57:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.882 03:57:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.882 ************************************ 00:06:24.882 END TEST accel_decomp 00:06:24.882 ************************************ 00:06:24.882 03:57:39 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:24.882 03:57:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:24.882 03:57:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.882 03:57:39 -- common/autotest_common.sh@10 -- # set +x 00:06:24.882 ************************************ 00:06:24.882 START TEST accel_decmop_full 00:06:24.882 ************************************ 00:06:24.882 03:57:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:24.882 03:57:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.882 03:57:39 -- accel/accel.sh@17 -- # local accel_module 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:24.882 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:24.882 03:57:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:24.882 03:57:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:24.882 03:57:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.882 03:57:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.882 03:57:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.882 03:57:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.882 03:57:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.882 03:57:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.882 03:57:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.882 03:57:39 -- accel/accel.sh@41 -- # jq -r . 00:06:24.882 [2024-04-19 03:57:39.374649] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
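This 'full' variant adds -o 0, and in the config dump below the buffer size changes from the usual '4096 bytes' to '111250 bytes'. The likely reading is that -o selects the I/O size and 0 means "use the whole input file", 111250 being the size of bib; that figure comes from the trace alone. A hypothetical sanity check:

  # inferred from the trace; 111250 has not been verified independently
  stat -c %s ./test/accel/bib   # expected to print 111250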
00:06:24.882 [2024-04-19 03:57:39.374699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656855 ] 00:06:24.882 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.141 [2024-04-19 03:57:39.454525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.141 [2024-04-19 03:57:39.541758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=0x1 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=decompress 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=software 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=32 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 
03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=32 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=1 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val=Yes 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.141 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.141 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:25.141 03:57:39 -- accel/accel.sh@20 -- # val= 00:06:25.142 03:57:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.142 03:57:39 -- accel/accel.sh@19 -- # IFS=: 00:06:25.142 03:57:39 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@20 -- # val= 00:06:26.519 03:57:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.519 03:57:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.519 03:57:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.519 00:06:26.519 real 0m1.424s 00:06:26.519 user 0m1.303s 00:06:26.519 sys 0m0.134s 00:06:26.519 03:57:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.519 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:26.519 ************************************ 00:06:26.519 END TEST accel_decmop_full 00:06:26.519 ************************************ 00:06:26.519 03:57:40 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.519 03:57:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:26.519 03:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.519 03:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:26.519 ************************************ 00:06:26.519 START TEST accel_decomp_mcore 00:06:26.519 ************************************ 00:06:26.519 03:57:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.519 03:57:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.519 03:57:40 -- accel/accel.sh@17 -- # local accel_module 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # IFS=: 00:06:26.519 03:57:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.519 03:57:40 -- accel/accel.sh@19 -- # read -r var val 00:06:26.519 03:57:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.519 03:57:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.519 03:57:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.519 03:57:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.519 03:57:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.519 03:57:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.519 03:57:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.519 03:57:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.519 03:57:40 -- accel/accel.sh@41 -- # jq -r . 00:06:26.519 [2024-04-19 03:57:40.962332] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
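The mcore run passes -m 0xf, SPDK's standard core-mask option, and the startup messages below bear that out: "Total cores available: 4" followed by reactors on cores 0 through 3. The mask arithmetic, with the same command reconstructed against a relative path (the path being the only assumption):

  # 0xf = 0b1111 -> lcores 0,1,2,3, i.e. four reactor threads
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf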
00:06:26.519 [2024-04-19 03:57:40.962416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657146 ] 00:06:26.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.780 [2024-04-19 03:57:41.047215] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.780 [2024-04-19 03:57:41.143860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.781 [2024-04-19 03:57:41.143964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.781 [2024-04-19 03:57:41.144095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.781 [2024-04-19 03:57:41.144097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=0xf 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=decompress 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=software 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=32 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=32 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=1 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val=Yes 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 03:57:41 -- accel/accel.sh@20 -- # val= 00:06:26.781 03:57:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 03:57:41 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 
03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.159 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.159 03:57:42 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.159 03:57:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.159 00:06:28.159 real 0m1.441s 00:06:28.159 user 0m4.663s 00:06:28.159 sys 0m0.148s 00:06:28.159 03:57:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.159 03:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.159 ************************************ 00:06:28.159 END TEST accel_decomp_mcore 00:06:28.159 ************************************ 00:06:28.159 03:57:42 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.159 03:57:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:28.159 03:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.159 03:57:42 -- common/autotest_common.sh@10 -- # set +x 00:06:28.159 ************************************ 00:06:28.159 START TEST accel_decomp_full_mcore 00:06:28.159 ************************************ 00:06:28.159 03:57:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.159 03:57:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.159 03:57:42 -- accel/accel.sh@17 -- # local accel_module 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.159 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.159 03:57:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.159 03:57:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.159 03:57:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.159 03:57:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.159 03:57:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.159 03:57:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.159 03:57:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.159 03:57:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.159 03:57:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.159 03:57:42 -- accel/accel.sh@41 -- # jq -r . 00:06:28.159 [2024-04-19 03:57:42.560325] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:06:28.160 [2024-04-19 03:57:42.560394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657436 ] 00:06:28.160 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.160 [2024-04-19 03:57:42.643857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.419 [2024-04-19 03:57:42.736267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.419 [2024-04-19 03:57:42.736375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.419 [2024-04-19 03:57:42.736501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.419 [2024-04-19 03:57:42.736502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=0xf 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=decompress 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=software 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=32 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=32 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=1 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val=Yes 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:28.419 03:57:42 -- accel/accel.sh@20 -- # val= 00:06:28.419 03:57:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # IFS=: 00:06:28.419 03:57:42 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 
03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@20 -- # val= 00:06:29.797 03:57:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:43 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.797 03:57:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.797 03:57:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.797 00:06:29.797 real 0m1.447s 00:06:29.797 user 0m4.702s 00:06:29.797 sys 0m0.155s 00:06:29.797 03:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.797 03:57:43 -- common/autotest_common.sh@10 -- # set +x 00:06:29.797 ************************************ 00:06:29.797 END TEST accel_decomp_full_mcore 00:06:29.797 ************************************ 00:06:29.797 03:57:44 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.797 03:57:44 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:29.797 03:57:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.797 03:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.797 ************************************ 00:06:29.797 START TEST accel_decomp_mthread 00:06:29.797 ************************************ 00:06:29.797 03:57:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.797 03:57:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.797 03:57:44 -- accel/accel.sh@17 -- # local accel_module 00:06:29.797 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:29.797 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:29.797 03:57:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.797 03:57:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.797 03:57:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.797 03:57:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.797 03:57:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.797 03:57:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.797 03:57:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.797 03:57:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.797 03:57:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.797 03:57:44 -- accel/accel.sh@41 -- # jq -r . 00:06:29.797 [2024-04-19 03:57:44.175282] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
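The mthread run adds -T 2, and the config dump below shows '2' in the slot where the single-threaded runs show '1', so -T plausibly sets the number of worker threads; this reading is inferred purely from the trace. A hedged sketch:

  # -T 2 -> two worker threads (inferred); otherwise the same decompress setup
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2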
00:06:29.797 [2024-04-19 03:57:44.175355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657731 ] 00:06:29.797 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.797 [2024-04-19 03:57:44.259741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.057 [2024-04-19 03:57:44.353490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=0x1 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=decompress 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=software 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=32 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 
-- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=32 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=2 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val=Yes 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:30.057 03:57:44 -- accel/accel.sh@20 -- # val= 00:06:30.057 03:57:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # IFS=: 00:06:30.057 03:57:44 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@20 -- # val= 00:06:31.493 03:57:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=: 00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val 00:06:31.493 03:57:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.493 03:57:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.493 03:57:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.493 00:06:31.493 real 0m1.432s 00:06:31.493 user 0m1.301s 00:06:31.493 sys 0m0.143s 00:06:31.493 03:57:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.493 03:57:45 -- common/autotest_common.sh@10 -- # set +x 
00:06:31.493 ************************************
00:06:31.493 END TEST accel_decomp_mthread
00:06:31.493 ************************************
00:06:31.493 03:57:45 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:31.493 03:57:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:06:31.493 03:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:31.493 03:57:45 -- common/autotest_common.sh@10 -- # set +x
00:06:31.493 ************************************
00:06:31.493 START TEST accel_deomp_full_mthread
00:06:31.493 ************************************
00:06:31.493 03:57:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:31.493 03:57:45 -- accel/accel.sh@16 -- # local accel_opc
00:06:31.493 03:57:45 -- accel/accel.sh@17 -- # local accel_module
00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.493 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.493 03:57:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:31.493 03:57:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:31.493 03:57:45 -- accel/accel.sh@12 -- # build_accel_config
00:06:31.493 03:57:45 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:31.493 03:57:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:31.493 03:57:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:31.493 03:57:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:31.493 03:57:45 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:31.493 03:57:45 -- accel/accel.sh@40 -- # local IFS=,
00:06:31.493 03:57:45 -- accel/accel.sh@41 -- # jq -r .
00:06:31.494 [2024-04-19 03:57:45.758878] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:06:31.494 [2024-04-19 03:57:45.758944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658021 ]
00:06:31.494 EAL: No free 2048 kB hugepages reported on node 1
00:06:31.494 [2024-04-19 03:57:45.841433] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.494 [2024-04-19 03:57:45.929438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=0x1
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=decompress
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=software
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@22 -- # accel_module=software
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=32
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=32
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=2
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=Yes
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:31.494 03:57:45 -- accel/accel.sh@20 -- # val=
00:06:31.494 03:57:45 -- accel/accel.sh@21 -- # case "$var" in
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # IFS=:
00:06:31.494 03:57:45 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.871 03:57:47 -- accel/accel.sh@20 -- # val=
00:06:32.871 03:57:47 -- accel/accel.sh@21 -- # case "$var" in
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # IFS=:
00:06:32.871 03:57:47 -- accel/accel.sh@19 -- # read -r var val
00:06:32.872 03:57:47 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:32.872 03:57:47 -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:32.872 03:57:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:32.872
00:06:32.872 real 0m1.453s
00:06:32.872 user 0m1.329s
00:06:32.872 sys 0m0.137s
00:06:32.872 03:57:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:32.872 03:57:47 -- common/autotest_common.sh@10 -- # set +x
00:06:32.872 ************************************
00:06:32.872 END TEST accel_deomp_full_mthread
00:06:32.872 ************************************
00:06:32.872 03:57:47 -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:32.872 03:57:47 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:32.872 03:57:47 -- accel/accel.sh@137 -- # build_accel_config
00:06:32.872 03:57:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:32.872 03:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:32.872 03:57:47 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:32.872 03:57:47 -- common/autotest_common.sh@10 -- # set +x
00:06:32.872 03:57:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:32.872 03:57:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:32.872 03:57:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:32.872 03:57:47 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:32.872 03:57:47 -- accel/accel.sh@40 -- # local IFS=,
00:06:32.872 03:57:47 -- accel/accel.sh@41 -- # jq -r .
00:06:32.872 ************************************
00:06:32.872 START TEST accel_dif_functional_tests
00:06:32.872 ************************************
00:06:32.872 03:57:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:32.872 [2024-04-19 03:57:47.395351] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:06:32.872 [2024-04-19 03:57:47.395399] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658312 ]
00:06:33.130 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.130 [2024-04-19 03:57:47.475700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:33.130 [2024-04-19 03:57:47.564229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:33.130 [2024-04-19 03:57:47.564332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:33.130 [2024-04-19 03:57:47.564333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.130
00:06:33.130
00:06:33.130 CUnit - A unit testing framework for C - Version 2.1-3
00:06:33.130 http://cunit.sourceforge.net/
00:06:33.130
00:06:33.130
00:06:33.130 Suite: accel_dif
00:06:33.130 Test: verify: DIF generated, GUARD check ...passed
00:06:33.130 Test: verify: DIF generated, APPTAG check ...passed
00:06:33.130 Test: verify: DIF generated, REFTAG check ...passed
00:06:33.130 Test: verify: DIF not generated, GUARD check ...[2024-04-19 03:57:47.639230] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:33.130 [2024-04-19 03:57:47.639284] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:33.130 passed
00:06:33.130 Test: verify: DIF not generated, APPTAG check ...[2024-04-19 03:57:47.639321] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:33.130 [2024-04-19 03:57:47.639340] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:33.130 passed
00:06:33.130 Test: verify: DIF not generated, REFTAG check ...[2024-04-19 03:57:47.639370] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:33.130 [2024-04-19 03:57:47.639391] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:33.130 passed
00:06:33.130 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:33.130 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-19 03:57:47.639449] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:33.130 passed
00:06:33.130 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:33.130 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:33.130 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:33.130 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-19 03:57:47.639592] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:33.130 passed
00:06:33.130 Test: generate copy: DIF generated, GUARD check ...passed
00:06:33.130 Test: generate copy: DIF generated, APTTAG check ...passed
00:06:33.130 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:33.130 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:33.130 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:33.130 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:33.130 Test: generate copy: iovecs-len validate ...[2024-04-19 03:57:47.639826] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:33.130 passed
00:06:33.130 Test: generate copy: buffer alignment validate ...passed
00:06:33.130
00:06:33.131 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:33.131               suites      1      1    n/a      0        0
00:06:33.131                tests     20     20     20      0        0
00:06:33.131              asserts    204    204    204      0      n/a
00:06:33.131
00:06:33.131 Elapsed time = 0.002 seconds
00:06:33.389
00:06:33.389 real 0m0.495s
00:06:33.389 user 0m0.671s
00:06:33.389 sys 0m0.159s
00:06:33.389 03:57:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:33.389 03:57:47 -- common/autotest_common.sh@10 -- # set +x
00:06:33.389 ************************************
00:06:33.389 END TEST accel_dif_functional_tests
00:06:33.389 ************************************
00:06:33.389
00:06:33.389 real 0m35.079s
00:06:33.389 user 0m36.760s
00:06:33.389 sys 0m5.995s
00:06:33.389 03:57:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:33.389 03:57:47 -- common/autotest_common.sh@10 -- # set +x
00:06:33.389 ************************************
00:06:33.389 END TEST accel
00:06:33.389 ************************************
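[Reader's aid: the whole accel suite above is driven through one example binary. A minimal manual reproduction of the decompress case is sketched below, assuming an SPDK checkout as the working directory; the flag glosses (-t run time in seconds, -w workload type, -T worker thread count, -y verify the result, -l input blob) are my reading and are not documented in this log.]

    # replay the accel decompress test by hand (sketch, not the harness's exact wrapper)
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2

[The harness additionally passes -c /dev/fd/62 to feed a JSON accel config on file descriptor 62; the DIF functional tests above use the same mechanism via test/accel/dif/dif -c /dev/fd/62.]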
00:06:33.648 03:57:48 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:33.648 03:57:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:33.648 03:57:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:33.648 03:57:48 -- common/autotest_common.sh@10 -- # set +x
00:06:33.648 ************************************
00:06:33.648 START TEST accel_rpc
00:06:33.648 ************************************
00:06:33.648 03:57:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
* Looking for test storage...
00:06:33.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:33.648 03:57:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:33.648 03:57:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3658638
00:06:33.648 03:57:48 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:33.648 03:57:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 3658638
00:06:33.648 03:57:48 -- common/autotest_common.sh@817 -- # '[' -z 3658638 ']'
00:06:33.648 03:57:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.648 03:57:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:33.648 03:57:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.648 03:57:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:33.648 03:57:48 -- common/autotest_common.sh@10 -- # set +x
00:06:33.907 [2024-04-19 03:57:48.183172] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:06:33.907 [2024-04-19 03:57:48.183226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658638 ]
00:06:33.907 EAL: No free 2048 kB hugepages reported on node 1
00:06:33.907 [2024-04-19 03:57:48.265095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.907 [2024-04-19 03:57:48.354204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.842 03:57:49 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:34.842 03:57:49 -- common/autotest_common.sh@850 -- # return 0
00:06:34.842 03:57:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:34.842 03:57:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:06:34.842 03:57:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:34.843 03:57:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:06:34.843 03:57:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:34.843 03:57:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:34.843 03:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:34.843 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:34.843 ************************************
00:06:34.843 START TEST accel_assign_opcode
00:06:34.843 ************************************
00:06:34.843 03:57:49 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite
00:06:34.843 03:57:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:34.843 03:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:34.843 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:34.843 [2024-04-19 03:57:49.244870] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:34.843 03:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:34.843 03:57:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:34.843 03:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:34.843 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:34.843 [2024-04-19 03:57:49.252888] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:34.843 03:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:34.843 03:57:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:34.843 03:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:34.843 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:35.101 03:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:35.101 03:57:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:35.101 03:57:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:35.101 03:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:35.101 03:57:49 -- accel/accel_rpc.sh@42 -- # grep software
00:06:35.102 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:35.102 03:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:35.102 software
00:06:35.102
00:06:35.102 real 0m0.256s
00:06:35.102 user 0m0.048s
00:06:35.102 sys 0m0.009s
00:06:35.102 03:57:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:35.102 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:35.102 ************************************
00:06:35.102 END TEST accel_assign_opcode
00:06:35.102 ************************************
00:06:35.102 03:57:49 -- accel/accel_rpc.sh@55 -- # killprocess 3658638
00:06:35.102 03:57:49 -- common/autotest_common.sh@936 -- # '[' -z 3658638 ']'
00:06:35.102 03:57:49 -- common/autotest_common.sh@940 -- # kill -0 3658638
00:06:35.102 03:57:49 -- common/autotest_common.sh@941 -- # uname
00:06:35.102 03:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:35.102 03:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3658638
00:06:35.102 03:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:35.102 03:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:35.102 03:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3658638'
00:06:35.102 killing process with pid 3658638
00:06:35.102 03:57:49 -- common/autotest_common.sh@955 -- # kill 3658638
00:06:35.102 03:57:49 -- common/autotest_common.sh@960 -- # wait 3658638
00:06:35.669
00:06:35.669 real 0m1.897s
00:06:35.669 user 0m2.098s
00:06:35.669 sys 0m0.512s
00:06:35.669 03:57:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:35.669 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:35.669 ************************************
00:06:35.669 END TEST accel_rpc
00:06:35.669 ************************************
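[Reader's aid: the accel_rpc test just completed reduces to four RPC calls against a target started with --wait-for-rpc. A condensed sketch, with commands exactly as they appear in this trace, run from an SPDK checkout:]

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                    # finish startup so the assignment takes effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # prints: software

[The app_cmdline test that follows exercises the complementary restriction: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods answers only those two methods and rejects anything else with JSON-RPC error -32601 ("Method not found"), as its trace below shows.]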
00:06:35.669 03:57:49 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:06:35.669 03:57:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:35.669 03:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:35.669 03:57:49 -- common/autotest_common.sh@10 -- # set +x
00:06:35.927 ************************************
00:06:35.927 START TEST app_cmdline
00:06:35.927 ************************************
00:06:35.927 03:57:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
* Looking for test storage...
00:06:35.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:35.927 03:57:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:35.927 03:57:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3658996
00:06:35.927 03:57:50 -- app/cmdline.sh@18 -- # waitforlisten 3658996
00:06:35.927 03:57:50 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:35.927 03:57:50 -- common/autotest_common.sh@817 -- # '[' -z 3658996 ']'
00:06:35.927 03:57:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.927 03:57:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:35.927 03:57:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.927 03:57:50 -- common/autotest_common.sh@826 -- # xtrace_disable
00:06:35.927 03:57:50 -- common/autotest_common.sh@10 -- # set +x
00:06:35.927 [2024-04-19 03:57:50.262845] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:06:35.927 [2024-04-19 03:57:50.262900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658996 ]
00:06:35.927 EAL: No free 2048 kB hugepages reported on node 1
00:06:35.927 [2024-04-19 03:57:50.345231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.927 [2024-04-19 03:57:50.435070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.861 03:57:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:06:36.861 03:57:51 -- common/autotest_common.sh@850 -- # return 0
00:06:36.861 03:57:51 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:37.119 {
00:06:37.120 "version": "SPDK v24.05-pre git sha1 77a84e60e",
00:06:37.120 "fields": {
00:06:37.120 "major": 24,
00:06:37.120 "minor": 5,
00:06:37.120 "patch": 0,
00:06:37.120 "suffix": "-pre",
00:06:37.120 "commit": "77a84e60e"
00:06:37.120 }
00:06:37.120 }
00:06:37.120 03:57:51 -- app/cmdline.sh@22 -- # expected_methods=()
00:06:37.120 03:57:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:37.120 03:57:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:37.120 03:57:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:37.120 03:57:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:37.120 03:57:51 -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:37.120 03:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:06:37.120 03:57:51 -- common/autotest_common.sh@10 -- # set +x
00:06:37.120 03:57:51 -- app/cmdline.sh@26 -- # sort
00:06:37.120 03:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:06:37.120 03:57:51 -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:37.120 03:57:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:37.120 03:57:51 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.120 03:57:51 -- common/autotest_common.sh@638 -- # local es=0
00:06:37.120 03:57:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.120 03:57:51 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.120 03:57:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:37.120 03:57:51 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.120 03:57:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:37.120 03:57:51 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.120 03:57:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:37.120 03:57:51 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:06:37.120 03:57:51 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:06:37.120 03:57:51 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:37.120 request:
00:06:37.120 {
00:06:37.120 "method": "env_dpdk_get_mem_stats",
00:06:37.120 "req_id": 1
00:06:37.120 }
00:06:37.120 Got JSON-RPC error response
00:06:37.120 response:
00:06:37.120 {
00:06:37.120 "code": -32601,
00:06:37.120 "message": "Method not found"
00:06:37.120 }
00:06:37.120 03:57:51 -- common/autotest_common.sh@641 -- # es=1
00:06:37.120 03:57:51 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:37.120 03:57:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:37.120 03:57:51 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:37.120 03:57:51 -- app/cmdline.sh@1 -- # killprocess 3658996
00:06:37.120 03:57:51 -- common/autotest_common.sh@936 -- # '[' -z 3658996 ']'
00:06:37.120 03:57:51 -- common/autotest_common.sh@940 -- # kill -0 3658996
00:06:37.120 03:57:51 -- common/autotest_common.sh@941 -- # uname
00:06:37.120 03:57:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:37.120 03:57:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3658996
00:06:37.378 03:57:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:37.378 03:57:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:37.378 03:57:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3658996'
00:06:37.378 killing process with pid 3658996
00:06:37.378 03:57:51 -- common/autotest_common.sh@955 -- # kill 3658996
00:06:37.378 03:57:51 -- common/autotest_common.sh@960 -- # wait 3658996
00:06:37.637
00:06:37.637 real 0m1.930s
00:06:37.637 user 0m2.403s
00:06:37.637 sys 0m0.479s
00:06:37.637 03:57:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:37.637 03:57:52 -- common/autotest_common.sh@10 -- # set +x
00:06:37.637 ************************************
00:06:37.637 END TEST app_cmdline
00:06:37.637 ************************************
00:06:37.637 03:57:52 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:06:37.896 03:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:37.896 03:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:37.896 03:57:52 -- common/autotest_common.sh@10 -- # set +x
00:06:37.896 ************************************
00:06:37.896 START TEST version
************************************
00:06:37.896 03:57:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
* Looking for test storage...
00:06:37.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:06:37.896 03:57:52 -- app/version.sh@17 -- # get_header_version major
00:06:37.896 03:57:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:37.896 03:57:52 -- app/version.sh@14 -- # cut -f2
00:06:37.896 03:57:52 -- app/version.sh@14 -- # tr -d '"'
00:06:37.896 03:57:52 -- app/version.sh@17 -- # major=24
00:06:37.896 03:57:52 -- app/version.sh@18 -- # get_header_version minor
00:06:37.896 03:57:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:37.896 03:57:52 -- app/version.sh@14 -- # cut -f2
00:06:37.896 03:57:52 -- app/version.sh@14 -- # tr -d '"'
00:06:37.896 03:57:52 -- app/version.sh@18 -- # minor=5
00:06:37.896 03:57:52 -- app/version.sh@19 -- # get_header_version patch
00:06:37.896 03:57:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:37.896 03:57:52 -- app/version.sh@14 -- # cut -f2
00:06:37.896 03:57:52 -- app/version.sh@14 -- # tr -d '"'
00:06:37.896 03:57:52 -- app/version.sh@19 -- # patch=0
00:06:37.896 03:57:52 -- app/version.sh@20 -- # get_header_version suffix
00:06:37.896 03:57:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:06:37.896 03:57:52 -- app/version.sh@14 -- # cut -f2
00:06:37.896 03:57:52 -- app/version.sh@14 -- # tr -d '"'
00:06:37.896 03:57:52 -- app/version.sh@20 -- # suffix=-pre
00:06:37.896 03:57:52 -- app/version.sh@22 -- # version=24.5
00:06:37.896 03:57:52 -- app/version.sh@25 -- # (( patch != 0 ))
00:06:37.896 03:57:52 -- app/version.sh@28 -- # version=24.5rc0
00:06:37.896 03:57:52 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:06:37.896 03:57:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:37.896 03:57:52 -- app/version.sh@30 -- # py_version=24.5rc0
00:06:37.896 03:57:52 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]]
00:06:37.896
00:06:37.896 real 0m0.159s
00:06:37.896 user 0m0.078s
00:06:37.896 sys 0m0.116s
00:06:37.896 03:57:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:37.896 03:57:52 -- common/autotest_common.sh@10 -- # set +x
00:06:37.896 ************************************
00:06:37.896 END TEST version
00:06:37.896 ************************************
00:06:37.897 03:57:52 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']'
00:06:37.897 03:57:52 -- spdk/autotest.sh@194 -- # uname -s
00:06:37.897 03:57:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:06:37.897 03:57:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:37.897 03:57:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:37.897 03:57:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:06:37.897 03:57:52 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']'
00:06:37.897 03:57:52 -- spdk/autotest.sh@258 -- # timing_exit lib
00:06:37.897 03:57:52 -- common/autotest_common.sh@716 -- # xtrace_disable
00:06:37.897 03:57:52 -- common/autotest_common.sh@10 -- # set +x
00:06:38.156 03:57:52 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:06:38.156 03:57:52 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']'
00:06:38.156 03:57:52 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']'
00:06:38.156 03:57:52 -- spdk/autotest.sh@278 -- # export NET_TYPE
00:06:38.156 03:57:52 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']'
00:06:38.156 03:57:52 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']'
00:06:38.156 03:57:52 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:06:38.156 03:57:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:06:38.156 03:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:38.156 03:57:52 -- common/autotest_common.sh@10 -- # set +x
00:06:38.156 ************************************
00:06:38.156 START TEST nvmf_tcp
00:06:38.156 ************************************
00:06:38.156 03:57:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
* Looking for test storage...
00:06:38.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:06:38.156 03:57:52 -- nvmf/nvmf.sh@10 -- # uname -s
00:06:38.156 03:57:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:06:38.156 03:57:52 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:38.156 03:57:52 -- nvmf/common.sh@7 -- # uname -s
00:06:38.156 03:57:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:38.156 03:57:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:38.156 03:57:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:38.156 03:57:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:38.156 03:57:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:38.156 03:57:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:38.156 03:57:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:38.156 03:57:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:38.156 03:57:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:38.416 03:57:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:38.416 03:57:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:06:38.416 03:57:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:06:38.416 03:57:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:38.416 03:57:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:38.416 03:57:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:38.416 03:57:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:38.416 03:57:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:38.416 03:57:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:38.416 03:57:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:38.416 03:57:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:38.416 03:57:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.416 03:57:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.416 03:57:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.416 03:57:52 -- paths/export.sh@5 -- # export PATH 00:06:38.416 03:57:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.416 03:57:52 -- nvmf/common.sh@47 -- # : 0 00:06:38.416 03:57:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.416 03:57:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.416 03:57:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.416 03:57:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.416 03:57:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.416 03:57:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.416 03:57:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.416 03:57:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.416 03:57:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.416 03:57:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:38.416 03:57:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:38.416 03:57:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:38.416 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.416 03:57:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:38.416 03:57:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.416 03:57:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:38.416 03:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.416 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.416 ************************************ 00:06:38.416 START TEST nvmf_example 00:06:38.416 ************************************ 00:06:38.416 03:57:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.416 * Looking for test storage... 
00:06:38.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.416 03:57:52 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.416 03:57:52 -- nvmf/common.sh@7 -- # uname -s 00:06:38.416 03:57:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.416 03:57:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.416 03:57:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.416 03:57:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.416 03:57:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.416 03:57:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.416 03:57:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.416 03:57:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.416 03:57:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.416 03:57:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.416 03:57:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:38.416 03:57:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:38.416 03:57:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.416 03:57:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.416 03:57:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.416 03:57:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.416 03:57:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.416 03:57:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.676 03:57:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.676 03:57:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.676 03:57:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.676 03:57:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.676 03:57:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.676 03:57:52 -- paths/export.sh@5 -- # export PATH 00:06:38.676 03:57:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.676 03:57:52 -- nvmf/common.sh@47 -- # : 0 00:06:38.676 03:57:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.676 03:57:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.676 03:57:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.676 03:57:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.676 03:57:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.676 03:57:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.676 03:57:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.676 03:57:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.676 03:57:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:38.676 03:57:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:38.676 03:57:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:38.676 03:57:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:38.676 03:57:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:38.676 03:57:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:38.676 03:57:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:38.676 03:57:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:38.676 03:57:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:38.676 03:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.676 03:57:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:38.676 03:57:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:38.676 03:57:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.676 03:57:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:38.676 03:57:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:38.676 03:57:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:38.676 03:57:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.676 03:57:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.676 03:57:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.676 03:57:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:38.676 03:57:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:38.676 03:57:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.676 03:57:52 -- 
common/autotest_common.sh@10 -- # set +x
00:06:43.950 03:57:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:06:43.950 03:57:58 -- nvmf/common.sh@291 -- # pci_devs=()
00:06:43.950 03:57:58 -- nvmf/common.sh@291 -- # local -a pci_devs
00:06:43.950 03:57:58 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:06:43.950 03:57:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:06:43.950 03:57:58 -- nvmf/common.sh@293 -- # pci_drivers=()
00:06:43.950 03:57:58 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:06:43.950 03:57:58 -- nvmf/common.sh@295 -- # net_devs=()
00:06:43.950 03:57:58 -- nvmf/common.sh@295 -- # local -ga net_devs
00:06:43.950 03:57:58 -- nvmf/common.sh@296 -- # e810=()
00:06:43.950 03:57:58 -- nvmf/common.sh@296 -- # local -ga e810
00:06:43.950 03:57:58 -- nvmf/common.sh@297 -- # x722=()
00:06:43.950 03:57:58 -- nvmf/common.sh@297 -- # local -ga x722
00:06:43.950 03:57:58 -- nvmf/common.sh@298 -- # mlx=()
00:06:43.951 03:57:58 -- nvmf/common.sh@298 -- # local -ga mlx
00:06:43.951 03:57:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:43.951 03:57:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:06:43.951 03:57:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:06:43.951 03:57:58 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:06:43.951 03:57:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:43.951 03:57:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:06:43.951 Found 0000:af:00.0 (0x8086 - 0x159b)
00:06:43.951 03:57:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:43.951 03:57:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:06:43.951 Found 0000:af:00.1 (0x8086 - 0x159b)
00:06:43.951 03:57:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
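[Reader's aid: the discovery above keys off PCI device IDs (0x1592/0x159b land in the e810 list here, both ports bound to the ice driver), and the lines that follow resolve each PCI function to its kernel netdev through sysfs. A one-command sketch of that lookup, using a PCI address from this log:]

    ls /sys/bus/pci/devices/0000:af:00.0/net/   # prints the netdev name; cvl_0_0 on this host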
00:06:43.951 03:57:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:06:43.951 03:57:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:43.951 03:57:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:43.951 03:57:58 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:06:43.951 03:57:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:43.951 03:57:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:06:43.951 Found net devices under 0000:af:00.0: cvl_0_0
00:06:43.951 03:57:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:06:43.951 03:57:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:43.951 03:57:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:43.951 03:57:58 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:06:43.951 03:57:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:43.951 03:57:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:06:43.951 Found net devices under 0000:af:00.1: cvl_0_1
00:06:43.951 03:57:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:06:43.951 03:57:58 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:06:43.951 03:57:58 -- nvmf/common.sh@403 -- # is_hw=yes
00:06:43.951 03:57:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:06:43.951 03:57:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:06:44.210 03:57:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:06:44.210 03:57:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:44.210 03:57:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:44.210 03:57:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:44.210 03:57:58 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:06:44.210 03:57:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:44.210 03:57:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:44.210 03:57:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:06:44.210 03:57:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:44.210 03:57:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:44.210 03:57:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:06:44.210 03:57:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:06:44.210 03:57:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:06:44.210 03:57:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:44.210 03:57:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:44.210 03:57:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:44.210 03:57:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:44.210 03:57:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:44.210 03:57:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:44.210 03:57:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
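[Reader's aid: the namespace plumbing just traced can be reproduced directly; a sketch using this host's interface names (substitute your own NIC ports):]

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # the reachability check that follows

[The two pings that follow confirm the path in both directions before the target is started.]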
00:06:44.210 03:57:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:44.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:44.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:06:44.469
00:06:44.469 --- 10.0.0.2 ping statistics ---
00:06:44.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:44.469 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:06:44.469 03:57:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:44.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:44.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:06:44.469
00:06:44.469 --- 10.0.0.1 ping statistics ---
00:06:44.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:44.469 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:06:44.469 03:57:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:44.469 03:57:58 -- nvmf/common.sh@411 -- # return 0
00:06:44.469 03:57:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:06:44.469 03:57:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:44.469 03:57:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:06:44.469 03:57:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:06:44.469 03:57:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:44.469 03:57:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:06:44.469 03:57:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:06:44.469 03:57:58 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:06:44.469 03:57:58 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:06:44.469 03:57:58 -- common/autotest_common.sh@710 -- # xtrace_disable
00:06:44.469 03:57:58 -- common/autotest_common.sh@10 -- # set +x
00:06:44.469 03:57:58 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:06:44.469 03:57:58 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:06:44.469 03:57:58 -- target/nvmf_example.sh@34 -- # nvmfpid=3662836
00:06:44.469 03:57:58 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:44.469 03:57:58 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:06:44.469 03:57:58 -- target/nvmf_example.sh@36 -- # waitforlisten 3662836
00:06:44.469 03:57:58 -- common/autotest_common.sh@817 -- # '[' -z 3662836 ']'
00:06:44.469 03:57:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:44.469 03:57:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:06:44.469 03:57:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:44.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
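[Reader's aid: the nvmf_example body that follows launches the example target inside the namespace, builds a subsystem over a malloc bdev, and drives it with the perf client. A condensed sketch of the same flow, commands taken from the trace below; the glosses (-q queue depth, -o I/O size in bytes, -M read percentage of the mixed workload, bdev size 64 MB with 512-byte blocks) are my reading, not stated in this log:]

    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

[In the trace itself the RPCs are issued through the harness's rpc_cmd wrapper; the rpc.py spelling above is an equivalent manual form against the default /var/tmp/spdk.sock socket.]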
00:06:44.469 03:57:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:44.469 03:57:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.406 03:57:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:45.406 03:57:59 -- common/autotest_common.sh@850 -- # return 0 00:06:45.406 03:57:59 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:45.406 03:57:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.406 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 03:57:59 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.406 03:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.406 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 03:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.406 03:57:59 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:45.406 03:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.406 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 03:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.406 03:57:59 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:45.406 03:57:59 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.406 03:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.406 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 03:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.406 03:57:59 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:45.406 03:57:59 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:45.406 03:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.406 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.406 03:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.406 03:57:59 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.407 03:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.407 03:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.407 03:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.407 03:57:59 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:45.407 03:57:59 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.614 Initializing NVMe Controllers 00:06:57.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:57.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:57.614 Initialization complete. Launching workers. 
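Decoded, the rpc_cmd trace above provisions the target end to end before handing off to the initiator-side perf tool. rpc_cmd is effectively the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same steps can be replayed directly; a sketch reusing the exact arguments from this run:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the traced options
    $rpc bdev_malloc_create 64 512                    # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: queue depth 64, 4 KiB random mixed I/O (30% reads) for 10 s
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The latency table that follows summarizes the outcome: roughly 15.8K IOPS (61.66 MiB/s) at about 4.06 ms average latency on this hardware.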
00:06:57.614 ======================================================== 00:06:57.614 Latency(us) 00:06:57.614 Device Information : IOPS MiB/s Average min max 00:06:57.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15784.10 61.66 4055.40 849.82 15205.52 00:06:57.614 ======================================================== 00:06:57.614 Total : 15784.10 61.66 4055.40 849.82 15205.52 00:06:57.614 00:06:57.614 03:58:10 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:57.614 03:58:10 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:57.614 03:58:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:57.614 03:58:10 -- nvmf/common.sh@117 -- # sync 00:06:57.614 03:58:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.614 03:58:10 -- nvmf/common.sh@120 -- # set +e 00:06:57.614 03:58:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.614 03:58:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.614 rmmod nvme_tcp 00:06:57.614 rmmod nvme_fabrics 00:06:57.614 rmmod nvme_keyring 00:06:57.614 03:58:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.614 03:58:10 -- nvmf/common.sh@124 -- # set -e 00:06:57.614 03:58:10 -- nvmf/common.sh@125 -- # return 0 00:06:57.614 03:58:10 -- nvmf/common.sh@478 -- # '[' -n 3662836 ']' 00:06:57.614 03:58:10 -- nvmf/common.sh@479 -- # killprocess 3662836 00:06:57.614 03:58:10 -- common/autotest_common.sh@936 -- # '[' -z 3662836 ']' 00:06:57.614 03:58:10 -- common/autotest_common.sh@940 -- # kill -0 3662836 00:06:57.614 03:58:10 -- common/autotest_common.sh@941 -- # uname 00:06:57.614 03:58:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.614 03:58:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3662836 00:06:57.614 03:58:10 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:57.614 03:58:10 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:57.614 03:58:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3662836' 00:06:57.614 killing process with pid 3662836 00:06:57.614 03:58:10 -- common/autotest_common.sh@955 -- # kill 3662836 00:06:57.614 03:58:10 -- common/autotest_common.sh@960 -- # wait 3662836 00:06:57.614 nvmf threads initialize successfully 00:06:57.614 bdev subsystem init successfully 00:06:57.614 created a nvmf target service 00:06:57.614 create targets's poll groups done 00:06:57.614 all subsystems of target started 00:06:57.614 nvmf target is running 00:06:57.614 all subsystems of target stopped 00:06:57.614 destroy targets's poll groups done 00:06:57.614 destroyed the nvmf target service 00:06:57.614 bdev subsystem finish successfully 00:06:57.614 nvmf threads destroy successfully 00:06:57.614 03:58:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:57.614 03:58:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:57.614 03:58:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:57.615 03:58:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.615 03:58:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.615 03:58:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.615 03:58:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.615 03:58:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.183 03:58:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:58.183 03:58:12 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:58.183 03:58:12 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:58.183 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 00:06:58.183 real 0m19.763s 00:06:58.183 user 0m46.883s 00:06:58.183 sys 0m5.725s 00:06:58.183 03:58:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.183 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 ************************************ 00:06:58.183 END TEST nvmf_example 00:06:58.183 ************************************ 00:06:58.183 03:58:12 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.183 03:58:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:58.183 03:58:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.183 03:58:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.443 ************************************ 00:06:58.443 START TEST nvmf_filesystem 00:06:58.443 ************************************ 00:06:58.443 03:58:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:58.443 * Looking for test storage... 00:06:58.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.443 03:58:12 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:58.443 03:58:12 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:58.443 03:58:12 -- common/autotest_common.sh@34 -- # set -e 00:06:58.443 03:58:12 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:58.443 03:58:12 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:58.443 03:58:12 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:58.443 03:58:12 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:58.443 03:58:12 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:58.443 03:58:12 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:58.443 03:58:12 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:58.443 03:58:12 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:58.443 03:58:12 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:58.443 03:58:12 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:58.443 03:58:12 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:58.443 03:58:12 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:58.443 03:58:12 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:58.444 03:58:12 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:58.444 03:58:12 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:58.444 03:58:12 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:58.444 03:58:12 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:58.444 03:58:12 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:58.444 03:58:12 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:58.444 03:58:12 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:58.444 03:58:12 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:58.444 03:58:12 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:58.444 03:58:12 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.444 03:58:12 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:58.444 03:58:12 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:58.444 03:58:12 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:58.444 03:58:12 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:58.444 03:58:12 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:58.444 03:58:12 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:58.444 03:58:12 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:58.444 03:58:12 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:58.444 03:58:12 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:58.444 03:58:12 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:58.444 03:58:12 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:58.444 03:58:12 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:58.444 03:58:12 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:58.444 03:58:12 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:58.444 03:58:12 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:58.444 03:58:12 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.444 03:58:12 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:58.444 03:58:12 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:58.444 03:58:12 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:58.444 03:58:12 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:58.444 03:58:12 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:58.444 03:58:12 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:58.444 03:58:12 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:58.444 03:58:12 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:58.444 03:58:12 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:58.444 03:58:12 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:58.444 03:58:12 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:58.444 03:58:12 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:58.444 03:58:12 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:58.444 03:58:12 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:58.444 03:58:12 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:58.444 03:58:12 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:58.444 03:58:12 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:58.444 03:58:12 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:58.444 03:58:12 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:58.444 03:58:12 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:58.444 03:58:12 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:58.444 03:58:12 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:58.444 03:58:12 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:58.444 03:58:12 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:58.444 
03:58:12 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:58.444 03:58:12 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:58.444 03:58:12 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:58.444 03:58:12 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:58.444 03:58:12 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:58.444 03:58:12 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:58.444 03:58:12 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:58.444 03:58:12 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:58.444 03:58:12 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:58.444 03:58:12 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:58.444 03:58:12 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:58.444 03:58:12 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:58.444 03:58:12 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:58.444 03:58:12 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:58.444 03:58:12 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:58.444 03:58:12 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:58.444 03:58:12 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:58.444 03:58:12 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:58.444 03:58:12 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.444 03:58:12 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:58.444 03:58:12 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.444 03:58:12 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:58.444 03:58:12 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.444 03:58:12 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.444 03:58:12 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.444 03:58:12 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.444 03:58:12 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:58.444 03:58:12 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:58.444 03:58:12 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:58.444 03:58:12 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:58.444 03:58:12 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:58.444 03:58:12 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:58.444 03:58:12 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:58.444 03:58:12 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:58.444 #define SPDK_CONFIG_H 00:06:58.444 #define SPDK_CONFIG_APPS 1 00:06:58.444 #define SPDK_CONFIG_ARCH native 00:06:58.444 #undef SPDK_CONFIG_ASAN 00:06:58.444 #undef SPDK_CONFIG_AVAHI 00:06:58.444 #undef SPDK_CONFIG_CET 00:06:58.444 #define SPDK_CONFIG_COVERAGE 1 00:06:58.444 #define SPDK_CONFIG_CROSS_PREFIX 00:06:58.444 #undef SPDK_CONFIG_CRYPTO 00:06:58.444 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:58.444 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:58.444 #undef SPDK_CONFIG_DAOS 00:06:58.444 #define SPDK_CONFIG_DAOS_DIR 00:06:58.444 #define SPDK_CONFIG_DEBUG 1 00:06:58.444 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:58.444 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:58.444 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:58.444 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:58.444 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:58.444 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:58.444 #define SPDK_CONFIG_EXAMPLES 1 00:06:58.444 #undef SPDK_CONFIG_FC 00:06:58.444 #define SPDK_CONFIG_FC_PATH 00:06:58.444 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:58.444 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:58.444 #undef SPDK_CONFIG_FUSE 00:06:58.444 #undef SPDK_CONFIG_FUZZER 00:06:58.444 #define SPDK_CONFIG_FUZZER_LIB 00:06:58.444 #undef SPDK_CONFIG_GOLANG 00:06:58.444 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:58.444 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:58.444 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:58.444 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:58.444 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:58.444 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:58.444 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:58.444 #define SPDK_CONFIG_IDXD 1 00:06:58.444 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:58.444 #undef SPDK_CONFIG_IPSEC_MB 00:06:58.444 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:58.444 #define SPDK_CONFIG_ISAL 1 00:06:58.444 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:58.444 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:58.444 #define SPDK_CONFIG_LIBDIR 00:06:58.444 #undef SPDK_CONFIG_LTO 00:06:58.444 #define SPDK_CONFIG_MAX_LCORES 00:06:58.444 #define SPDK_CONFIG_NVME_CUSE 1 00:06:58.444 #undef SPDK_CONFIG_OCF 00:06:58.444 #define SPDK_CONFIG_OCF_PATH 00:06:58.444 #define SPDK_CONFIG_OPENSSL_PATH 00:06:58.444 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:58.444 #define SPDK_CONFIG_PGO_DIR 00:06:58.444 #undef SPDK_CONFIG_PGO_USE 00:06:58.444 #define SPDK_CONFIG_PREFIX /usr/local 00:06:58.444 #undef SPDK_CONFIG_RAID5F 00:06:58.444 #undef SPDK_CONFIG_RBD 00:06:58.444 #define SPDK_CONFIG_RDMA 1 00:06:58.444 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:58.444 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:58.444 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:58.444 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:58.444 #define SPDK_CONFIG_SHARED 1 00:06:58.445 #undef SPDK_CONFIG_SMA 00:06:58.445 #define SPDK_CONFIG_TESTS 1 00:06:58.445 #undef SPDK_CONFIG_TSAN 00:06:58.445 #define SPDK_CONFIG_UBLK 1 00:06:58.445 #define SPDK_CONFIG_UBSAN 1 00:06:58.445 #undef SPDK_CONFIG_UNIT_TESTS 00:06:58.445 #undef SPDK_CONFIG_URING 00:06:58.445 #define SPDK_CONFIG_URING_PATH 00:06:58.445 #undef SPDK_CONFIG_URING_ZNS 00:06:58.445 #undef SPDK_CONFIG_USDT 00:06:58.445 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:58.445 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:58.445 #define SPDK_CONFIG_VFIO_USER 1 00:06:58.445 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:58.445 #define SPDK_CONFIG_VHOST 1 00:06:58.445 #define SPDK_CONFIG_VIRTIO 1 00:06:58.445 #undef SPDK_CONFIG_VTUNE 00:06:58.445 #define SPDK_CONFIG_VTUNE_DIR 00:06:58.445 #define SPDK_CONFIG_WERROR 1 00:06:58.445 #define SPDK_CONFIG_WPDK_DIR 00:06:58.445 #undef SPDK_CONFIG_XNVME 00:06:58.445 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:58.445 03:58:12 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:58.445 03:58:12 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.445 03:58:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.445 03:58:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.445 03:58:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.445 03:58:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.445 03:58:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.445 03:58:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.445 03:58:12 -- paths/export.sh@5 -- # export PATH 00:06:58.445 03:58:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.445 03:58:12 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.445 03:58:12 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:58.445 03:58:12 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.445 03:58:12 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:58.445 03:58:12 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:58.445 03:58:12 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.445 03:58:12 -- pm/common@67 -- # TEST_TAG=N/A 00:06:58.445 03:58:12 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:58.445 03:58:12 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.445 03:58:12 -- pm/common@71 -- # uname -s 00:06:58.445 03:58:12 -- pm/common@71 -- # PM_OS=Linux 00:06:58.445 03:58:12 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:58.445 03:58:12 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:58.445 03:58:12 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:58.445 03:58:12 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:58.445 03:58:12 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:58.445 03:58:12 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:58.445 03:58:12 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:58.445 03:58:12 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:58.445 03:58:12 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:58.445 03:58:12 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:58.445 03:58:12 -- common/autotest_common.sh@57 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:58.445 03:58:12 -- common/autotest_common.sh@61 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:58.445 03:58:12 -- common/autotest_common.sh@63 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:58.445 03:58:12 -- common/autotest_common.sh@65 -- # : 1 00:06:58.445 03:58:12 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:58.445 03:58:12 -- common/autotest_common.sh@67 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:58.445 03:58:12 -- common/autotest_common.sh@69 -- # : 00:06:58.445 03:58:12 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:58.445 03:58:12 -- common/autotest_common.sh@71 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:58.445 03:58:12 -- common/autotest_common.sh@73 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:58.445 03:58:12 -- common/autotest_common.sh@75 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:58.445 03:58:12 -- common/autotest_common.sh@77 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:58.445 03:58:12 -- common/autotest_common.sh@79 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:58.445 03:58:12 -- common/autotest_common.sh@81 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:58.445 03:58:12 -- common/autotest_common.sh@83 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:58.445 03:58:12 -- common/autotest_common.sh@85 -- # : 1 00:06:58.445 03:58:12 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:58.445 03:58:12 -- common/autotest_common.sh@87 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:58.445 03:58:12 -- common/autotest_common.sh@89 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:58.445 03:58:12 -- common/autotest_common.sh@91 -- # : 1 
00:06:58.445 03:58:12 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:58.445 03:58:12 -- common/autotest_common.sh@93 -- # : 1 00:06:58.445 03:58:12 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:58.445 03:58:12 -- common/autotest_common.sh@95 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:58.445 03:58:12 -- common/autotest_common.sh@97 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:58.445 03:58:12 -- common/autotest_common.sh@99 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:58.445 03:58:12 -- common/autotest_common.sh@101 -- # : tcp 00:06:58.445 03:58:12 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:58.445 03:58:12 -- common/autotest_common.sh@103 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:58.445 03:58:12 -- common/autotest_common.sh@105 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:58.445 03:58:12 -- common/autotest_common.sh@107 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:58.445 03:58:12 -- common/autotest_common.sh@109 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:58.445 03:58:12 -- common/autotest_common.sh@111 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:58.445 03:58:12 -- common/autotest_common.sh@113 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:58.445 03:58:12 -- common/autotest_common.sh@115 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:58.445 03:58:12 -- common/autotest_common.sh@117 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:58.445 03:58:12 -- common/autotest_common.sh@119 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:58.445 03:58:12 -- common/autotest_common.sh@121 -- # : 1 00:06:58.445 03:58:12 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:58.445 03:58:12 -- common/autotest_common.sh@123 -- # : 00:06:58.445 03:58:12 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:58.445 03:58:12 -- common/autotest_common.sh@125 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:58.445 03:58:12 -- common/autotest_common.sh@127 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:58.445 03:58:12 -- common/autotest_common.sh@129 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:58.445 03:58:12 -- common/autotest_common.sh@131 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:58.445 03:58:12 -- common/autotest_common.sh@133 -- # : 0 00:06:58.445 03:58:12 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:58.445 03:58:12 -- common/autotest_common.sh@135 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:58.446 03:58:12 -- common/autotest_common.sh@137 -- # : 00:06:58.446 03:58:12 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:58.446 03:58:12 -- 
common/autotest_common.sh@139 -- # : true 00:06:58.446 03:58:12 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:58.446 03:58:12 -- common/autotest_common.sh@141 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:58.446 03:58:12 -- common/autotest_common.sh@143 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:58.446 03:58:12 -- common/autotest_common.sh@145 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:58.446 03:58:12 -- common/autotest_common.sh@147 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:58.446 03:58:12 -- common/autotest_common.sh@149 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:58.446 03:58:12 -- common/autotest_common.sh@151 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:58.446 03:58:12 -- common/autotest_common.sh@153 -- # : e810 00:06:58.446 03:58:12 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:58.446 03:58:12 -- common/autotest_common.sh@155 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:58.446 03:58:12 -- common/autotest_common.sh@157 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:58.446 03:58:12 -- common/autotest_common.sh@159 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:58.446 03:58:12 -- common/autotest_common.sh@161 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:58.446 03:58:12 -- common/autotest_common.sh@163 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:58.446 03:58:12 -- common/autotest_common.sh@166 -- # : 00:06:58.446 03:58:12 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:58.446 03:58:12 -- common/autotest_common.sh@168 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:58.446 03:58:12 -- common/autotest_common.sh@170 -- # : 0 00:06:58.446 03:58:12 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:58.446 03:58:12 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:58.446 03:58:12 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.446 03:58:12 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:58.446 03:58:12 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.446 03:58:12 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:58.706 03:58:12 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:58.706 03:58:12 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:58.706 03:58:12 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.706 03:58:12 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:58.706 03:58:12 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.706 03:58:12 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:58.706 03:58:12 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:58.706 03:58:12 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:58.706 03:58:12 -- common/autotest_common.sh@199 -- # cat 00:06:58.706 03:58:12 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:58.706 03:58:12 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.706 03:58:12 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:58.706 03:58:12 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.706 03:58:12 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:58.706 03:58:12 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:58.706 03:58:12 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:58.706 03:58:12 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.706 03:58:12 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:58.706 03:58:12 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.706 03:58:12 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:58.706 03:58:12 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.706 03:58:12 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.706 03:58:12 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.706 03:58:12 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.706 03:58:12 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.706 03:58:12 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:58.706 03:58:12 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.706 03:58:12 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.706 03:58:12 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:58.706 03:58:12 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:58.706 03:58:12 -- common/autotest_common.sh@252 -- # valgrind= 00:06:58.706 03:58:12 -- common/autotest_common.sh@258 -- # uname -s 00:06:58.706 03:58:12 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:58.706 03:58:12 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:58.706 03:58:12 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:58.706 03:58:12 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:58.707 03:58:12 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.707 03:58:12 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:58.707 
03:58:12 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:58.707 03:58:12 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:06:58.707 03:58:12 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:58.707 03:58:12 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:58.707 03:58:12 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:58.707 03:58:12 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:58.707 03:58:12 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:58.707 03:58:12 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:58.707 03:58:12 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:58.707 03:58:12 -- common/autotest_common.sh@307 -- # [[ -z 3665578 ]] 00:06:58.707 03:58:12 -- common/autotest_common.sh@307 -- # kill -0 3665578 00:06:58.707 03:58:12 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:58.707 03:58:12 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:58.707 03:58:12 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:58.707 03:58:12 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:58.707 03:58:12 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:58.707 03:58:12 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:58.707 03:58:12 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:58.707 03:58:12 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:58.707 03:58:12 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.U3beX6 00:06:58.707 03:58:12 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:58.707 03:58:12 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:58.707 03:58:12 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:58.707 03:58:12 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.U3beX6/tests/target /tmp/spdk.U3beX6 00:06:58.707 03:58:13 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@316 -- # df -T 00:06:58.707 03:58:13 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=995520512 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288909312 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=86837919744 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=94501433344 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=7663513600 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=47247339520 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47250714624 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=3375104 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=18890833920 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=18900287488 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=9453568 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=47250231296 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=47250718720 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=487424 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=9450135552 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9450139648 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=9450135552 00:06:58.707 03:58:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=9450139648 00:06:58.707 03:58:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:58.707 03:58:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:58.707 03:58:13 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:58.707 * Looking for test storage... 
00:06:58.707 03:58:13 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:58.707 03:58:13 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:58.707 03:58:13 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.707 03:58:13 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:58.707 03:58:13 -- common/autotest_common.sh@361 -- # mount=/ 00:06:58.707 03:58:13 -- common/autotest_common.sh@363 -- # target_space=86837919744 00:06:58.707 03:58:13 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:58.707 03:58:13 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:58.707 03:58:13 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:58.707 03:58:13 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:58.707 03:58:13 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:58.707 03:58:13 -- common/autotest_common.sh@370 -- # new_size=9878106112 00:06:58.707 03:58:13 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:58.707 03:58:13 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.707 03:58:13 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.707 03:58:13 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.707 03:58:13 -- common/autotest_common.sh@378 -- # return 0 00:06:58.707 03:58:13 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:58.707 03:58:13 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:58.707 03:58:13 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:58.707 03:58:13 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:58.707 03:58:13 -- common/autotest_common.sh@1673 -- # true 00:06:58.707 03:58:13 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:58.707 03:58:13 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:58.707 03:58:13 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:58.707 03:58:13 -- common/autotest_common.sh@27 -- # exec 00:06:58.707 03:58:13 -- common/autotest_common.sh@29 -- # exec 00:06:58.707 03:58:13 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:58.707 03:58:13 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:58.707 03:58:13 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:58.707 03:58:13 -- common/autotest_common.sh@18 -- # set -x 00:06:58.707 03:58:13 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.707 03:58:13 -- nvmf/common.sh@7 -- # uname -s 00:06:58.707 03:58:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.707 03:58:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.707 03:58:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.707 03:58:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.707 03:58:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.707 03:58:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.707 03:58:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.707 03:58:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.707 03:58:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.707 03:58:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.707 03:58:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:06:58.707 03:58:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:06:58.707 03:58:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.707 03:58:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.707 03:58:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.707 03:58:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.707 03:58:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.707 03:58:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.707 03:58:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.707 03:58:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.708 03:58:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.708 03:58:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.708 03:58:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.708 03:58:13 -- paths/export.sh@5 -- # export PATH 00:06:58.708 03:58:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.708 03:58:13 -- nvmf/common.sh@47 -- # : 0 00:06:58.708 03:58:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:58.708 03:58:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:58.708 03:58:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.708 03:58:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.708 03:58:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.708 03:58:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:58.708 03:58:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:58.708 03:58:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:58.708 03:58:13 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:58.708 03:58:13 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:58.708 03:58:13 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:58.708 03:58:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:58.708 03:58:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.708 03:58:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:58.708 03:58:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:58.708 03:58:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:58.708 03:58:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.708 03:58:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:58.708 03:58:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.708 03:58:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:58.708 03:58:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:58.708 03:58:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:58.708 03:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.277 03:58:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:05.277 03:58:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.277 03:58:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.277 03:58:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.277 03:58:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.277 03:58:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.277 03:58:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.277 03:58:18 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:05.277 03:58:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.277 03:58:18 -- nvmf/common.sh@296 -- # e810=() 00:07:05.277 03:58:18 -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.277 03:58:18 -- nvmf/common.sh@297 -- # x722=() 00:07:05.277 03:58:18 -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.277 03:58:18 -- nvmf/common.sh@298 -- # mlx=() 00:07:05.277 03:58:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.277 03:58:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.277 03:58:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:05.277 03:58:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.277 03:58:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.277 03:58:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.277 03:58:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:05.277 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:05.277 03:58:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.277 03:58:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:05.277 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:05.277 03:58:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.277 03:58:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.277 03:58:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.277 03:58:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.277 03:58:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:05.277 03:58:18 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.277 03:58:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:05.277 Found net devices under 0000:af:00.0: cvl_0_0 00:07:05.277 03:58:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.277 03:58:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.277 03:58:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.277 03:58:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:05.277 03:58:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.277 03:58:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:05.277 Found net devices under 0000:af:00.1: cvl_0_1 00:07:05.277 03:58:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.277 03:58:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:05.277 03:58:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:05.277 03:58:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:05.278 03:58:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:05.278 03:58:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:05.278 03:58:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.278 03:58:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.278 03:58:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.278 03:58:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.278 03:58:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.278 03:58:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.278 03:58:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.278 03:58:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.278 03:58:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.278 03:58:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.278 03:58:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.278 03:58:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.278 03:58:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.278 03:58:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.278 03:58:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.278 03:58:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.278 03:58:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.278 03:58:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.278 03:58:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.278 03:58:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:07:05.278 00:07:05.278 --- 10.0.0.2 ping statistics --- 00:07:05.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.278 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:07:05.278 03:58:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:07:05.278 00:07:05.278 --- 10.0.0.1 ping statistics --- 00:07:05.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.278 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:05.278 03:58:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.278 03:58:18 -- nvmf/common.sh@411 -- # return 0 00:07:05.278 03:58:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:05.278 03:58:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.278 03:58:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:05.278 03:58:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:05.278 03:58:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.278 03:58:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:05.278 03:58:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:05.278 03:58:18 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:05.278 03:58:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.278 03:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.278 03:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:05.278 ************************************ 00:07:05.278 START TEST nvmf_filesystem_no_in_capsule 00:07:05.278 ************************************ 00:07:05.278 03:58:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:05.278 03:58:19 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:05.278 03:58:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:05.278 03:58:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:05.278 03:58:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:05.278 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.278 03:58:19 -- nvmf/common.sh@470 -- # nvmfpid=3668746 00:07:05.278 03:58:19 -- nvmf/common.sh@471 -- # waitforlisten 3668746 00:07:05.278 03:58:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.278 03:58:19 -- common/autotest_common.sh@817 -- # '[' -z 3668746 ']' 00:07:05.278 03:58:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.278 03:58:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.278 03:58:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.278 03:58:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.278 03:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.278 [2024-04-19 03:58:19.195452] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:05.278 [2024-04-19 03:58:19.195505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.278 [2024-04-19 03:58:19.281962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.278 [2024-04-19 03:58:19.374939] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
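The nvmf_tcp_init trace above builds a single-host loopback topology: the two ports of the E810 NIC (cvl_0_0 and cvl_0_1, discovered under /sys/bus/pci/devices/$pci/net) are split so the target-side port lives in its own network namespace. A minimal sketch of the equivalent commands, assuming the same interface names and 10.0.0.0/24 addressing used in the trace:

# move the target port into a private namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1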
00:07:05.278 [2024-04-19 03:58:19.374978] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.278 [2024-04-19 03:58:19.374989] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.278 [2024-04-19 03:58:19.374998] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.278 [2024-04-19 03:58:19.375005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.278 [2024-04-19 03:58:19.375051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.278 [2024-04-19 03:58:19.375151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.278 [2024-04-19 03:58:19.375266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.278 [2024-04-19 03:58:19.375267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.848 03:58:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:05.848 03:58:20 -- common/autotest_common.sh@850 -- # return 0 00:07:05.848 03:58:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:05.848 03:58:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 03:58:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.848 03:58:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:05.848 03:58:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 [2024-04-19 03:58:20.180320] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 [2024-04-19 03:58:20.345376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:05.848 03:58:20 -- common/autotest_common.sh@1366 -- # local bs 00:07:05.848 03:58:20 -- common/autotest_common.sh@1367 -- # local nb 00:07:05.848 03:58:20 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:05.848 03:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.848 03:58:20 -- common/autotest_common.sh@10 -- # set +x 00:07:05.848 03:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.848 03:58:20 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:05.848 { 00:07:05.848 "name": "Malloc1", 00:07:05.848 "aliases": [ 00:07:05.848 "f16ae863-f2c8-4f2d-b81d-c4df54d94dc2" 00:07:05.848 ], 00:07:05.848 "product_name": "Malloc disk", 00:07:05.848 "block_size": 512, 00:07:05.848 "num_blocks": 1048576, 00:07:05.848 "uuid": "f16ae863-f2c8-4f2d-b81d-c4df54d94dc2", 00:07:05.848 "assigned_rate_limits": { 00:07:05.848 "rw_ios_per_sec": 0, 00:07:05.848 "rw_mbytes_per_sec": 0, 00:07:05.848 "r_mbytes_per_sec": 0, 00:07:05.848 "w_mbytes_per_sec": 0 00:07:05.848 }, 00:07:05.848 "claimed": true, 00:07:05.848 "claim_type": "exclusive_write", 00:07:05.848 "zoned": false, 00:07:05.848 "supported_io_types": { 00:07:05.848 "read": true, 00:07:05.848 "write": true, 00:07:05.848 "unmap": true, 00:07:05.848 "write_zeroes": true, 00:07:05.848 "flush": true, 00:07:05.848 "reset": true, 00:07:05.848 "compare": false, 00:07:05.848 "compare_and_write": false, 00:07:05.848 "abort": true, 00:07:05.848 "nvme_admin": false, 00:07:05.848 "nvme_io": false 00:07:05.848 }, 00:07:05.848 "memory_domains": [ 00:07:05.848 { 00:07:05.848 "dma_device_id": "system", 00:07:05.848 "dma_device_type": 1 00:07:05.848 }, 00:07:05.848 { 00:07:05.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.848 "dma_device_type": 2 00:07:05.848 } 00:07:05.848 ], 00:07:05.848 "driver_specific": {} 00:07:05.849 } 00:07:05.849 ]' 00:07:06.132 03:58:20 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:06.132 03:58:20 -- common/autotest_common.sh@1369 -- # bs=512 00:07:06.132 03:58:20 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:06.132 03:58:20 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:06.132 03:58:20 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:06.132 03:58:20 -- common/autotest_common.sh@1374 -- # echo 512 00:07:06.132 03:58:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:06.132 03:58:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.524 03:58:21 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.524 03:58:21 -- common/autotest_common.sh@1184 -- # local i=0 00:07:07.524 03:58:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.524 03:58:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:07.524 03:58:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:09.423 03:58:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:09.423 03:58:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:09.423 03:58:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.423 03:58:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
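The rpc_cmd calls traced above map onto SPDK's scripts/rpc.py methods (talking to the target's default /var/tmp/spdk.sock control socket). A condensed sketch of the provisioning sequence and the get_bdev_size check; the $rpc shortcut is illustrative, not part of the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data in this pass
$rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# get_bdev_size: block_size * num_blocks from bdev_get_bdevs, extracted with jq as in the trace
bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
echo $(( bs * nb / 1024 / 1024 ))                             # 512 (MiB), matching bdev_size=512 above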
00:07:09.423 03:58:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.423 03:58:23 -- common/autotest_common.sh@1194 -- # return 0 00:07:09.423 03:58:23 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:09.423 03:58:23 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:09.423 03:58:23 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:09.423 03:58:23 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:09.423 03:58:23 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:09.424 03:58:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:09.424 03:58:23 -- setup/common.sh@80 -- # echo 536870912 00:07:09.424 03:58:23 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:09.424 03:58:23 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:09.424 03:58:23 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:09.424 03:58:23 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:09.682 03:58:24 -- target/filesystem.sh@69 -- # partprobe 00:07:09.940 03:58:24 -- target/filesystem.sh@70 -- # sleep 1 00:07:10.877 03:58:25 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:10.877 03:58:25 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:10.877 03:58:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:10.877 03:58:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.877 03:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.135 ************************************ 00:07:11.135 START TEST filesystem_ext4 00:07:11.135 ************************************ 00:07:11.135 03:58:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.135 03:58:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.135 03:58:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.135 03:58:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.135 03:58:25 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:11.135 03:58:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:11.135 03:58:25 -- common/autotest_common.sh@914 -- # local i=0 00:07:11.135 03:58:25 -- common/autotest_common.sh@915 -- # local force 00:07:11.135 03:58:25 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:11.135 03:58:25 -- common/autotest_common.sh@918 -- # force=-F 00:07:11.135 03:58:25 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.135 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.136 Discarding device blocks: 0/522240 done 00:07:11.136 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.136 Filesystem UUID: 1e84d73b-d3bb-4b89-81da-92fd413b66f7 00:07:11.136 Superblock backups stored on blocks: 00:07:11.136 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.136 00:07:11.136 Allocating group tables: 0/64 done 00:07:11.136 Writing inode tables: 0/64 done 00:07:11.136 Creating journal (8192 blocks): done 00:07:11.136 Writing superblocks and filesystem accounting information: 0/64 done 00:07:11.136 00:07:11.136 03:58:25 -- common/autotest_common.sh@931 -- # return 0 00:07:11.136 03:58:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:11.394 03:58:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:11.394 03:58:25 -- target/filesystem.sh@25 -- # sync 00:07:11.394 03:58:25 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:07:11.394 03:58:25 -- target/filesystem.sh@27 -- # sync 00:07:11.394 03:58:25 -- target/filesystem.sh@29 -- # i=0 00:07:11.394 03:58:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.394 03:58:25 -- target/filesystem.sh@37 -- # kill -0 3668746 00:07:11.394 03:58:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:11.394 03:58:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:11.394 03:58:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:11.394 03:58:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:11.394 00:07:11.394 real 0m0.424s 00:07:11.394 user 0m0.024s 00:07:11.394 sys 0m0.063s 00:07:11.394 03:58:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.394 03:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.394 ************************************ 00:07:11.394 END TEST filesystem_ext4 00:07:11.394 ************************************ 00:07:11.394 03:58:25 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:11.394 03:58:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:11.394 03:58:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.394 03:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.653 ************************************ 00:07:11.653 START TEST filesystem_btrfs 00:07:11.653 ************************************ 00:07:11.653 03:58:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:11.653 03:58:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:11.653 03:58:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.653 03:58:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:11.653 03:58:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:11.653 03:58:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:11.653 03:58:26 -- common/autotest_common.sh@914 -- # local i=0 00:07:11.653 03:58:26 -- common/autotest_common.sh@915 -- # local force 00:07:11.653 03:58:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:11.653 03:58:26 -- common/autotest_common.sh@920 -- # force=-f 00:07:11.653 03:58:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:11.912 btrfs-progs v6.6.2 00:07:11.912 See https://btrfs.readthedocs.io for more information. 00:07:11.912 00:07:11.912 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
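Every filesystem_* test above runs the same verification cycle once mkfs succeeds; the btrfs output starting here and the xfs run below repeat it verbatim. Reconstructed from the trace:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# the target process must survive filesystem I/O over NVMe/TCP...
kill -0 "$nvmfpid"
# ...and the namespace and its partition must still be visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1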
00:07:11.912 NOTE: several default settings have changed in version 5.15, please make sure 00:07:11.912 this does not affect your deployments: 00:07:11.912 - DUP for metadata (-m dup) 00:07:11.912 - enabled no-holes (-O no-holes) 00:07:11.912 - enabled free-space-tree (-R free-space-tree) 00:07:11.912 00:07:11.912 Label: (null) 00:07:11.912 UUID: 4ea69445-3e78-4c86-bbc3-c9316d776254 00:07:11.912 Node size: 16384 00:07:11.912 Sector size: 4096 00:07:11.912 Filesystem size: 510.00MiB 00:07:11.912 Block group profiles: 00:07:11.912 Data: single 8.00MiB 00:07:11.912 Metadata: DUP 32.00MiB 00:07:11.912 System: DUP 8.00MiB 00:07:11.912 SSD detected: yes 00:07:11.912 Zoned device: no 00:07:11.912 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:11.912 Runtime features: free-space-tree 00:07:11.912 Checksum: crc32c 00:07:11.912 Number of devices: 1 00:07:11.912 Devices: 00:07:11.912 ID SIZE PATH 00:07:11.912 1 510.00MiB /dev/nvme0n1p1 00:07:11.912 00:07:11.912 03:58:26 -- common/autotest_common.sh@931 -- # return 0 00:07:11.912 03:58:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.849 03:58:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.849 03:58:27 -- target/filesystem.sh@25 -- # sync 00:07:12.849 03:58:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.849 03:58:27 -- target/filesystem.sh@27 -- # sync 00:07:12.849 03:58:27 -- target/filesystem.sh@29 -- # i=0 00:07:12.849 03:58:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.849 03:58:27 -- target/filesystem.sh@37 -- # kill -0 3668746 00:07:12.849 03:58:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.849 03:58:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.849 03:58:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.849 03:58:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.849 00:07:12.849 real 0m1.232s 00:07:12.849 user 0m0.028s 00:07:12.849 sys 0m0.120s 00:07:12.849 03:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.849 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:12.849 ************************************ 00:07:12.849 END TEST filesystem_btrfs 00:07:12.849 ************************************ 00:07:12.849 03:58:27 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:12.849 03:58:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:12.849 03:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.849 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:13.108 ************************************ 00:07:13.108 START TEST filesystem_xfs 00:07:13.108 ************************************ 00:07:13.108 03:58:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.108 03:58:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.108 03:58:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.108 03:58:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.108 03:58:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:13.108 03:58:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:13.108 03:58:27 -- common/autotest_common.sh@914 -- # local i=0 00:07:13.108 03:58:27 -- common/autotest_common.sh@915 -- # local force 00:07:13.108 03:58:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:13.108 03:58:27 -- common/autotest_common.sh@920 -- # force=-f 00:07:13.108 03:58:27 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.108 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.108 = sectsz=512 attr=2, projid32bit=1 00:07:13.108 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.108 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.108 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.108 = sunit=0 swidth=0 blks 00:07:13.108 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.108 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.108 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.108 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.043 Discarding blocks...Done. 00:07:14.043 03:58:28 -- common/autotest_common.sh@931 -- # return 0 00:07:14.043 03:58:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.720 03:58:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.720 03:58:30 -- target/filesystem.sh@25 -- # sync 00:07:16.720 03:58:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.720 03:58:30 -- target/filesystem.sh@27 -- # sync 00:07:16.720 03:58:30 -- target/filesystem.sh@29 -- # i=0 00:07:16.720 03:58:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.720 03:58:31 -- target/filesystem.sh@37 -- # kill -0 3668746 00:07:16.720 03:58:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.720 03:58:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.720 03:58:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.720 03:58:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.720 00:07:16.720 real 0m3.573s 00:07:16.720 user 0m0.023s 00:07:16.720 sys 0m0.073s 00:07:16.720 03:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.720 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.720 ************************************ 00:07:16.720 END TEST filesystem_xfs 00:07:16.720 ************************************ 00:07:16.720 03:58:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:16.720 03:58:31 -- target/filesystem.sh@93 -- # sync 00:07:16.720 03:58:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.720 03:58:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.720 03:58:31 -- common/autotest_common.sh@1205 -- # local i=0 00:07:16.979 03:58:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:16.979 03:58:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.979 03:58:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:16.979 03:58:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.979 03:58:31 -- common/autotest_common.sh@1217 -- # return 0 00:07:16.979 03:58:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.979 03:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.979 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:16.979 03:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.979 03:58:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:16.979 03:58:31 -- target/filesystem.sh@101 -- # killprocess 3668746 00:07:16.979 03:58:31 -- common/autotest_common.sh@936 -- # '[' -z 3668746 ']' 00:07:16.979 03:58:31 -- common/autotest_common.sh@940 -- # kill -0 3668746 00:07:16.979 03:58:31 -- 
common/autotest_common.sh@941 -- # uname 00:07:16.979 03:58:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.979 03:58:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3668746 00:07:16.979 03:58:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.979 03:58:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.979 03:58:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3668746' 00:07:16.979 killing process with pid 3668746 00:07:16.979 03:58:31 -- common/autotest_common.sh@955 -- # kill 3668746 00:07:16.979 03:58:31 -- common/autotest_common.sh@960 -- # wait 3668746 00:07:17.239 03:58:31 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.239 00:07:17.239 real 0m12.589s 00:07:17.239 user 0m49.377s 00:07:17.239 sys 0m1.431s 00:07:17.239 03:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.239 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.239 ************************************ 00:07:17.239 END TEST nvmf_filesystem_no_in_capsule 00:07:17.239 ************************************ 00:07:17.239 03:58:31 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.239 03:58:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.239 03:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.239 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.498 ************************************ 00:07:17.498 START TEST nvmf_filesystem_in_capsule 00:07:17.498 ************************************ 00:07:17.498 03:58:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:17.498 03:58:31 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.498 03:58:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.498 03:58:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:17.498 03:58:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:17.498 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.498 03:58:31 -- nvmf/common.sh@470 -- # nvmfpid=3671361 00:07:17.498 03:58:31 -- nvmf/common.sh@471 -- # waitforlisten 3671361 00:07:17.498 03:58:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.498 03:58:31 -- common/autotest_common.sh@817 -- # '[' -z 3671361 ']' 00:07:17.498 03:58:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.498 03:58:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.498 03:58:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.498 03:58:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.498 03:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.498 [2024-04-19 03:58:31.963783] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
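The second pass starting above (nvmf_filesystem_in_capsule) is the same test with one setup difference: the TCP transport is created with a 4096-byte in-capsule data size, so small host writes can ride inside the command capsule instead of being solicited separately by the target. The contrast, sketched with rpc.py standing in for the rpc_cmd wrapper:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4 KiB in-capsule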
00:07:17.498 [2024-04-19 03:58:31.963838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.498 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.757 [2024-04-19 03:58:32.051145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.757 [2024-04-19 03:58:32.136130] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.757 [2024-04-19 03:58:32.136178] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.757 [2024-04-19 03:58:32.136190] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.757 [2024-04-19 03:58:32.136200] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.757 [2024-04-19 03:58:32.136207] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.757 [2024-04-19 03:58:32.136313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.757 [2024-04-19 03:58:32.136426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.757 [2024-04-19 03:58:32.136468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.757 [2024-04-19 03:58:32.136470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.757 03:58:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.757 03:58:32 -- common/autotest_common.sh@850 -- # return 0 00:07:17.757 03:58:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:17.757 03:58:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.757 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 03:58:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.016 03:58:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.016 03:58:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 [2024-04-19 03:58:32.297811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 [2024-04-19 03:58:32.455463] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:18.016 03:58:32 -- common/autotest_common.sh@1366 -- # local bs 00:07:18.016 03:58:32 -- common/autotest_common.sh@1367 -- # local nb 00:07:18.016 03:58:32 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.016 03:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.016 03:58:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 03:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.016 03:58:32 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:18.016 { 00:07:18.016 "name": "Malloc1", 00:07:18.016 "aliases": [ 00:07:18.016 "80fbd352-da71-4588-9560-20520bc89c68" 00:07:18.016 ], 00:07:18.016 "product_name": "Malloc disk", 00:07:18.016 "block_size": 512, 00:07:18.016 "num_blocks": 1048576, 00:07:18.016 "uuid": "80fbd352-da71-4588-9560-20520bc89c68", 00:07:18.016 "assigned_rate_limits": { 00:07:18.016 "rw_ios_per_sec": 0, 00:07:18.016 "rw_mbytes_per_sec": 0, 00:07:18.016 "r_mbytes_per_sec": 0, 00:07:18.016 "w_mbytes_per_sec": 0 00:07:18.016 }, 00:07:18.016 "claimed": true, 00:07:18.016 "claim_type": "exclusive_write", 00:07:18.016 "zoned": false, 00:07:18.016 "supported_io_types": { 00:07:18.016 "read": true, 00:07:18.016 "write": true, 00:07:18.016 "unmap": true, 00:07:18.016 "write_zeroes": true, 00:07:18.016 "flush": true, 00:07:18.017 "reset": true, 00:07:18.017 "compare": false, 00:07:18.017 "compare_and_write": false, 00:07:18.017 "abort": true, 00:07:18.017 "nvme_admin": false, 00:07:18.017 "nvme_io": false 00:07:18.017 }, 00:07:18.017 "memory_domains": [ 00:07:18.017 { 00:07:18.017 "dma_device_id": "system", 00:07:18.017 "dma_device_type": 1 00:07:18.017 }, 00:07:18.017 { 00:07:18.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.017 "dma_device_type": 2 00:07:18.017 } 00:07:18.017 ], 00:07:18.017 "driver_specific": {} 00:07:18.017 } 00:07:18.017 ]' 00:07:18.017 03:58:32 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:18.017 03:58:32 -- common/autotest_common.sh@1369 -- # bs=512 00:07:18.017 03:58:32 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:18.275 03:58:32 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:18.275 03:58:32 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:18.275 03:58:32 -- common/autotest_common.sh@1374 -- # echo 512 00:07:18.275 03:58:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.275 03:58:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.653 03:58:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.653 03:58:33 -- common/autotest_common.sh@1184 -- # local i=0 00:07:19.653 03:58:33 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.653 03:58:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:19.653 03:58:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:21.558 03:58:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:21.558 03:58:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:21.558 03:58:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.558 03:58:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:21.558 03:58:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.558 03:58:35 -- common/autotest_common.sh@1194 -- # return 0 00:07:21.558 03:58:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.558 03:58:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:21.558 03:58:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.558 03:58:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.558 03:58:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.558 03:58:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.558 03:58:35 -- setup/common.sh@80 -- # echo 536870912 00:07:21.558 03:58:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.558 03:58:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.558 03:58:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.558 03:58:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:21.817 03:58:36 -- target/filesystem.sh@69 -- # partprobe 00:07:22.755 03:58:37 -- target/filesystem.sh@70 -- # sleep 1 00:07:23.690 03:58:38 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:23.691 03:58:38 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.691 03:58:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:23.691 03:58:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.691 03:58:38 -- common/autotest_common.sh@10 -- # set +x 00:07:23.691 ************************************ 00:07:23.691 START TEST filesystem_in_capsule_ext4 00:07:23.691 ************************************ 00:07:23.691 03:58:38 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.691 03:58:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.691 03:58:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.691 03:58:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.691 03:58:38 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:23.691 03:58:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:23.691 03:58:38 -- common/autotest_common.sh@914 -- # local i=0 00:07:23.691 03:58:38 -- common/autotest_common.sh@915 -- # local force 00:07:23.691 03:58:38 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:23.691 03:58:38 -- common/autotest_common.sh@918 -- # force=-F 00:07:23.691 03:58:38 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.691 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.950 Discarding device blocks: 0/522240 done 00:07:23.950 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.950 Filesystem UUID: 99783e07-2b0e-47d4-80cf-c14c2d2f1499 00:07:23.950 Superblock backups stored on blocks: 00:07:23.950 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.950 00:07:23.950 
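Host-side attach is likewise shared between the two passes: connect with the generated hostnqn/hostid, poll until a block device carrying the subsystem's serial appears, then carve a single GPT partition for mkfs. A sketch of that sequence under the trace's names; the loop shape mirrors waitforserial's 15-attempt counter but is a reconstruction, not the verbatim helper:

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

i=0
while (( i++ <= 15 )); do
    # count block devices carrying the serial set at subsystem creation
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices == 1 )) && break
    sleep 2
done

# one partition spanning the whole namespace, as handed to mkfs below
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe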
Allocating group tables: 0/64 done 00:07:23.950 Writing inode tables: 0/64 done 00:07:23.950 Creating journal (8192 blocks): done 00:07:25.035 Writing superblocks and filesystem accounting information: 0/64 done 00:07:25.035 00:07:25.035 03:58:39 -- common/autotest_common.sh@931 -- # return 0 00:07:25.035 03:58:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.602 03:58:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.602 03:58:40 -- target/filesystem.sh@25 -- # sync 00:07:25.603 03:58:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.603 03:58:40 -- target/filesystem.sh@27 -- # sync 00:07:25.603 03:58:40 -- target/filesystem.sh@29 -- # i=0 00:07:25.603 03:58:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.603 03:58:40 -- target/filesystem.sh@37 -- # kill -0 3671361 00:07:25.603 03:58:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.603 03:58:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.603 03:58:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.603 03:58:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.603 00:07:25.603 real 0m1.938s 00:07:25.603 user 0m0.022s 00:07:25.603 sys 0m0.069s 00:07:25.603 03:58:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.603 03:58:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 ************************************ 00:07:25.603 END TEST filesystem_in_capsule_ext4 00:07:25.603 ************************************ 00:07:25.862 03:58:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.862 03:58:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:25.862 03:58:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.862 03:58:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.862 ************************************ 00:07:25.862 START TEST filesystem_in_capsule_btrfs 00:07:25.862 ************************************ 00:07:25.862 03:58:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.862 03:58:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.862 03:58:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.862 03:58:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.862 03:58:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:25.862 03:58:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:25.862 03:58:40 -- common/autotest_common.sh@914 -- # local i=0 00:07:25.862 03:58:40 -- common/autotest_common.sh@915 -- # local force 00:07:25.862 03:58:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:25.862 03:58:40 -- common/autotest_common.sh@920 -- # force=-f 00:07:25.862 03:58:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:26.121 btrfs-progs v6.6.2 00:07:26.121 See https://btrfs.readthedocs.io for more information. 00:07:26.121 00:07:26.121 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:26.121 NOTE: several default settings have changed in version 5.15, please make sure 00:07:26.121 this does not affect your deployments: 00:07:26.121 - DUP for metadata (-m dup) 00:07:26.121 - enabled no-holes (-O no-holes) 00:07:26.121 - enabled free-space-tree (-R free-space-tree) 00:07:26.121 00:07:26.121 Label: (null) 00:07:26.121 UUID: aaac0d56-29e5-4745-bcfc-17f55f7ace12 00:07:26.121 Node size: 16384 00:07:26.121 Sector size: 4096 00:07:26.121 Filesystem size: 510.00MiB 00:07:26.121 Block group profiles: 00:07:26.121 Data: single 8.00MiB 00:07:26.121 Metadata: DUP 32.00MiB 00:07:26.121 System: DUP 8.00MiB 00:07:26.121 SSD detected: yes 00:07:26.121 Zoned device: no 00:07:26.121 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:26.121 Runtime features: free-space-tree 00:07:26.121 Checksum: crc32c 00:07:26.121 Number of devices: 1 00:07:26.121 Devices: 00:07:26.121 ID SIZE PATH 00:07:26.121 1 510.00MiB /dev/nvme0n1p1 00:07:26.121 00:07:26.121 03:58:40 -- common/autotest_common.sh@931 -- # return 0 00:07:26.121 03:58:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.059 03:58:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.059 03:58:41 -- target/filesystem.sh@25 -- # sync 00:07:27.059 03:58:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.059 03:58:41 -- target/filesystem.sh@27 -- # sync 00:07:27.059 03:58:41 -- target/filesystem.sh@29 -- # i=0 00:07:27.059 03:58:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.059 03:58:41 -- target/filesystem.sh@37 -- # kill -0 3671361 00:07:27.059 03:58:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.059 03:58:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.059 03:58:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.059 03:58:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.059 00:07:27.059 real 0m1.003s 00:07:27.059 user 0m0.031s 00:07:27.059 sys 0m0.121s 00:07:27.059 03:58:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.059 03:58:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 ************************************ 00:07:27.059 END TEST filesystem_in_capsule_btrfs 00:07:27.059 ************************************ 00:07:27.059 03:58:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:27.059 03:58:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:27.059 03:58:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.059 03:58:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 ************************************ 00:07:27.059 START TEST filesystem_in_capsule_xfs 00:07:27.059 ************************************ 00:07:27.059 03:58:41 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:27.059 03:58:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:27.059 03:58:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.059 03:58:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:27.059 03:58:41 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:27.059 03:58:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:27.059 03:58:41 -- common/autotest_common.sh@914 -- # local i=0 00:07:27.059 03:58:41 -- common/autotest_common.sh@915 -- # local force 00:07:27.059 03:58:41 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:27.059 03:58:41 -- common/autotest_common.sh@920 -- # force=-f 
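The make_filesystem calls traced throughout differ only in the force flag handed to mkfs: ext4 spells it -F, while btrfs and xfs (as in the xfs invocation that follows) use -f. A reconstruction of the helper's core from the trace, with its retry bookkeeping elided:

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0        # retry counter visible in the trace, unused in this sketch
    local force

    # ext4's mkfs spells "force" -F; btrfs and xfs use -f
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    mkfs.$fstype $force "$dev_name"
}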
00:07:27.059 03:58:41 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:27.059 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:27.059 = sectsz=512 attr=2, projid32bit=1 00:07:27.059 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:27.059 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:27.059 data = bsize=4096 blocks=130560, imaxpct=25 00:07:27.059 = sunit=0 swidth=0 blks 00:07:27.059 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:27.059 log =internal log bsize=4096 blocks=16384, version=2 00:07:27.059 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:27.059 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.996 Discarding blocks...Done. 00:07:27.996 03:58:42 -- common/autotest_common.sh@931 -- # return 0 00:07:27.996 03:58:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.558 03:58:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.558 03:58:44 -- target/filesystem.sh@25 -- # sync 00:07:30.558 03:58:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.558 03:58:44 -- target/filesystem.sh@27 -- # sync 00:07:30.558 03:58:44 -- target/filesystem.sh@29 -- # i=0 00:07:30.558 03:58:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.558 03:58:44 -- target/filesystem.sh@37 -- # kill -0 3671361 00:07:30.558 03:58:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.558 03:58:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.558 03:58:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.558 03:58:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.558 00:07:30.558 real 0m3.189s 00:07:30.558 user 0m0.031s 00:07:30.558 sys 0m0.062s 00:07:30.558 03:58:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.558 03:58:44 -- common/autotest_common.sh@10 -- # set +x 00:07:30.558 ************************************ 00:07:30.558 END TEST filesystem_in_capsule_xfs 00:07:30.558 ************************************ 00:07:30.558 03:58:44 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.558 03:58:44 -- target/filesystem.sh@93 -- # sync 00:07:30.558 03:58:44 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:30.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.817 03:58:45 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:30.817 03:58:45 -- common/autotest_common.sh@1205 -- # local i=0 00:07:30.817 03:58:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:30.817 03:58:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.817 03:58:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:30.817 03:58:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.817 03:58:45 -- common/autotest_common.sh@1217 -- # return 0 00:07:30.817 03:58:45 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.817 03:58:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:30.817 03:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.817 03:58:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:30.817 03:58:45 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:30.817 03:58:45 -- target/filesystem.sh@101 -- # killprocess 3671361 00:07:30.817 03:58:45 -- common/autotest_common.sh@936 -- # '[' -z 3671361 ']' 00:07:30.817 03:58:45 -- common/autotest_common.sh@940 -- # kill -0 3671361 
00:07:30.817 03:58:45 -- common/autotest_common.sh@941 -- # uname 00:07:30.817 03:58:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.817 03:58:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3671361 00:07:30.817 03:58:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:30.817 03:58:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:30.817 03:58:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3671361' 00:07:30.817 killing process with pid 3671361 00:07:30.817 03:58:45 -- common/autotest_common.sh@955 -- # kill 3671361 00:07:30.817 03:58:45 -- common/autotest_common.sh@960 -- # wait 3671361 00:07:31.076 03:58:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:31.076 00:07:31.076 real 0m13.680s 00:07:31.076 user 0m53.543s 00:07:31.076 sys 0m1.444s 00:07:31.076 03:58:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.076 03:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:31.076 ************************************ 00:07:31.076 END TEST nvmf_filesystem_in_capsule 00:07:31.076 ************************************ 00:07:31.335 03:58:45 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:31.335 03:58:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:31.335 03:58:45 -- nvmf/common.sh@117 -- # sync 00:07:31.335 03:58:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.335 03:58:45 -- nvmf/common.sh@120 -- # set +e 00:07:31.335 03:58:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.335 03:58:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.335 rmmod nvme_tcp 00:07:31.335 rmmod nvme_fabrics 00:07:31.335 rmmod nvme_keyring 00:07:31.335 03:58:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.335 03:58:45 -- nvmf/common.sh@124 -- # set -e 00:07:31.335 03:58:45 -- nvmf/common.sh@125 -- # return 0 00:07:31.335 03:58:45 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:31.335 03:58:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:31.335 03:58:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:31.335 03:58:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:31.335 03:58:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.335 03:58:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.336 03:58:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.336 03:58:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.336 03:58:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.242 03:58:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.242 00:07:33.242 real 0m34.978s 00:07:33.242 user 1m44.783s 00:07:33.242 sys 0m7.667s 00:07:33.242 03:58:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.242 03:58:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.242 ************************************ 00:07:33.242 END TEST nvmf_filesystem 00:07:33.242 ************************************ 00:07:33.500 03:58:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.500 03:58:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.500 03:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.500 03:58:47 -- common/autotest_common.sh@10 -- # set +x 00:07:33.500 ************************************ 00:07:33.500 START TEST nvmf_discovery 00:07:33.500 ************************************ 00:07:33.500 
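With the filesystem suite finished and the nvme-tcp/nvme-fabrics/nvme-keyring modules unloaded cleanly (the rmmod lines above), the run moves on to discovery.sh, which exercises the target's discovery service. For orientation, the host-side equivalent of what that test queries is the standard nvme-cli call below; the test itself drives this through its own helpers:

nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562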
03:58:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.500 * Looking for test storage... 00:07:33.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.500 03:58:48 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.500 03:58:48 -- nvmf/common.sh@7 -- # uname -s 00:07:33.500 03:58:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.500 03:58:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.500 03:58:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.500 03:58:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.500 03:58:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.500 03:58:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.500 03:58:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.500 03:58:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.500 03:58:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.500 03:58:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.760 03:58:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:33.760 03:58:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:33.760 03:58:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.760 03:58:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.760 03:58:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.760 03:58:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.760 03:58:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.760 03:58:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.760 03:58:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.760 03:58:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.760 03:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.760 03:58:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.760 03:58:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.760 03:58:48 -- paths/export.sh@5 -- # export PATH 00:07:33.760 03:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.760 03:58:48 -- nvmf/common.sh@47 -- # : 0 00:07:33.760 03:58:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.760 03:58:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.760 03:58:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.760 03:58:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.760 03:58:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.760 03:58:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.760 03:58:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.760 03:58:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.760 03:58:48 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:33.760 03:58:48 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:33.760 03:58:48 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:33.760 03:58:48 -- target/discovery.sh@15 -- # hash nvme 00:07:33.760 03:58:48 -- target/discovery.sh@20 -- # nvmftestinit 00:07:33.760 03:58:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:33.760 03:58:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.760 03:58:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:33.760 03:58:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:33.760 03:58:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:33.760 03:58:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.760 03:58:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.760 03:58:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.760 03:58:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:33.760 03:58:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:33.760 03:58:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.760 03:58:48 -- common/autotest_common.sh@10 -- # set +x 00:07:39.034 03:58:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:39.034 03:58:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.034 03:58:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.034 03:58:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.034 03:58:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.034 03:58:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.034 03:58:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.034 03:58:53 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:39.034 03:58:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.034 03:58:53 -- nvmf/common.sh@296 -- # e810=() 00:07:39.034 03:58:53 -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.034 03:58:53 -- nvmf/common.sh@297 -- # x722=() 00:07:39.034 03:58:53 -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.034 03:58:53 -- nvmf/common.sh@298 -- # mlx=() 00:07:39.034 03:58:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.034 03:58:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.034 03:58:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.034 03:58:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.034 03:58:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.034 03:58:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:39.034 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:39.034 03:58:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.034 03:58:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:39.034 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:39.034 03:58:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.034 03:58:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.034 03:58:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.034 03:58:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:39.034 Found net devices under 0000:af:00.0: cvl_0_0 00:07:39.034 03:58:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.034 03:58:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.034 03:58:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.034 03:58:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.034 03:58:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:39.034 Found net devices under 0000:af:00.1: cvl_0_1 00:07:39.034 03:58:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.034 03:58:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:39.034 03:58:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:39.034 03:58:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:39.034 03:58:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.034 03:58:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.034 03:58:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.034 03:58:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.034 03:58:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.034 03:58:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.034 03:58:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.034 03:58:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.034 03:58:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.034 03:58:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.034 03:58:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.034 03:58:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.034 03:58:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.034 03:58:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.034 03:58:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.034 03:58:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.034 03:58:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.293 03:58:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.293 03:58:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.293 03:58:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:07:39.293 00:07:39.293 --- 10.0.0.2 ping statistics --- 00:07:39.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.293 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:07:39.293 03:58:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:07:39.293 00:07:39.293 --- 10.0.0.1 ping statistics --- 00:07:39.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.293 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:39.293 03:58:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.293 03:58:53 -- nvmf/common.sh@411 -- # return 0 00:07:39.293 03:58:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:39.293 03:58:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.293 03:58:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:39.293 03:58:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:39.293 03:58:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.293 03:58:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:39.293 03:58:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:39.293 03:58:53 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:39.293 03:58:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:39.293 03:58:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:39.293 03:58:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 03:58:53 -- nvmf/common.sh@470 -- # nvmfpid=3677681 00:07:39.293 03:58:53 -- nvmf/common.sh@471 -- # waitforlisten 3677681 00:07:39.293 03:58:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.293 03:58:53 -- common/autotest_common.sh@817 -- # '[' -z 3677681 ']' 00:07:39.293 03:58:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.293 03:58:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:39.293 03:58:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.293 03:58:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:39.293 03:58:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.293 [2024-04-19 03:58:53.777967] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:07:39.293 [2024-04-19 03:58:53.778022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.293 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.552 [2024-04-19 03:58:53.864662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.552 [2024-04-19 03:58:53.955508] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.552 [2024-04-19 03:58:53.955550] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.552 [2024-04-19 03:58:53.955560] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.552 [2024-04-19 03:58:53.955569] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.552 [2024-04-19 03:58:53.955576] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
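The app_setup_trace notices above point at two ways to inspect the tracepoints enabled by -e 0xFFFF; both are taken from the log's own hints, and the spdk_trace binary path is an assumption based on the build tree used elsewhere in this run:

    # take a live snapshot of the nvmf target (shm id 0, matching the -i 0 it was started with)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

    # or copy the shared-memory trace file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0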
00:07:39.552 [2024-04-19 03:58:53.955836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.552 [2024-04-19 03:58:53.955856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.552 [2024-04-19 03:58:53.955974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.552 [2024-04-19 03:58:53.955974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.488 03:58:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:40.488 03:58:54 -- common/autotest_common.sh@850 -- # return 0 00:07:40.488 03:58:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:40.488 03:58:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:40.488 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 03:58:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.488 03:58:54 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.488 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.488 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 [2024-04-19 03:58:54.754074] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.488 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.488 03:58:54 -- target/discovery.sh@26 -- # seq 1 4 00:07:40.488 03:58:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.488 03:58:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:40.488 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.488 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 Null1 00:07:40.488 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.488 03:58:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:40.488 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.488 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.488 03:58:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:40.488 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.488 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.488 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 [2024-04-19 03:58:54.802405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.489 03:58:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 Null2 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:40.489 03:58:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.489 03:58:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 Null3 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.489 03:58:54 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 Null4 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:40.489 
03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:40.489 03:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.489 03:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.489 03:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.489 03:58:54 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:40.748 00:07:40.748 Discovery Log Number of Records 6, Generation counter 6 00:07:40.748 =====Discovery Log Entry 0====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: current discovery subsystem 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4420 00:07:40.748 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: explicit discovery connections, duplicate discovery information 00:07:40.748 sectype: none 00:07:40.748 =====Discovery Log Entry 1====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: nvme subsystem 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4420 00:07:40.748 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: none 00:07:40.748 sectype: none 00:07:40.748 =====Discovery Log Entry 2====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: nvme subsystem 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4420 00:07:40.748 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: none 00:07:40.748 sectype: none 00:07:40.748 =====Discovery Log Entry 3====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: nvme subsystem 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4420 00:07:40.748 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: none 00:07:40.748 sectype: none 00:07:40.748 =====Discovery Log Entry 4====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: nvme subsystem 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4420 00:07:40.748 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: none 00:07:40.748 sectype: none 00:07:40.748 =====Discovery Log Entry 5====== 00:07:40.748 trtype: tcp 00:07:40.748 adrfam: ipv4 00:07:40.748 subtype: discovery subsystem referral 00:07:40.748 treq: not required 00:07:40.748 portid: 0 00:07:40.748 trsvcid: 4430 00:07:40.748 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:40.748 traddr: 10.0.0.2 00:07:40.748 eflags: none 00:07:40.748 sectype: none 00:07:40.748 03:58:55 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:40.748 Perform nvmf subsystem discovery via RPC 00:07:40.748 03:58:55 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:40.748 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.748 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 [2024-04-19 03:58:55.091242] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:40.749 [ 00:07:40.749 { 00:07:40.749 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:40.749 "subtype": "Discovery", 00:07:40.749 "listen_addresses": [ 00:07:40.749 { 00:07:40.749 "transport": "TCP", 00:07:40.749 "trtype": "TCP", 00:07:40.749 "adrfam": "IPv4", 00:07:40.749 "traddr": "10.0.0.2", 00:07:40.749 "trsvcid": "4420" 00:07:40.749 } 00:07:40.749 ], 00:07:40.749 "allow_any_host": true, 00:07:40.749 "hosts": [] 00:07:40.749 }, 00:07:40.749 { 00:07:40.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:40.749 "subtype": "NVMe", 00:07:40.749 "listen_addresses": [ 00:07:40.749 { 00:07:40.749 "transport": "TCP", 00:07:40.749 "trtype": "TCP", 00:07:40.749 "adrfam": "IPv4", 00:07:40.749 "traddr": "10.0.0.2", 00:07:40.749 "trsvcid": "4420" 00:07:40.749 } 00:07:40.749 ], 00:07:40.749 "allow_any_host": true, 00:07:40.749 "hosts": [], 00:07:40.749 "serial_number": "SPDK00000000000001", 00:07:40.749 "model_number": "SPDK bdev Controller", 00:07:40.749 "max_namespaces": 32, 00:07:40.749 "min_cntlid": 1, 00:07:40.749 "max_cntlid": 65519, 00:07:40.749 "namespaces": [ 00:07:40.749 { 00:07:40.749 "nsid": 1, 00:07:40.749 "bdev_name": "Null1", 00:07:40.749 "name": "Null1", 00:07:40.749 "nguid": "22BE59E1ECCB4114B9554A5F25B31C64", 00:07:40.749 "uuid": "22be59e1-eccb-4114-b955-4a5f25b31c64" 00:07:40.749 } 00:07:40.749 ] 00:07:40.749 }, 00:07:40.749 { 00:07:40.749 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:40.749 "subtype": "NVMe", 00:07:40.749 "listen_addresses": [ 00:07:40.749 { 00:07:40.749 "transport": "TCP", 00:07:40.749 "trtype": "TCP", 00:07:40.749 "adrfam": "IPv4", 00:07:40.749 "traddr": "10.0.0.2", 00:07:40.749 "trsvcid": "4420" 00:07:40.749 } 00:07:40.749 ], 00:07:40.749 "allow_any_host": true, 00:07:40.749 "hosts": [], 00:07:40.749 "serial_number": "SPDK00000000000002", 00:07:40.749 "model_number": "SPDK bdev Controller", 00:07:40.749 "max_namespaces": 32, 00:07:40.749 "min_cntlid": 1, 00:07:40.749 "max_cntlid": 65519, 00:07:40.749 "namespaces": [ 00:07:40.749 { 00:07:40.749 "nsid": 1, 00:07:40.749 "bdev_name": "Null2", 00:07:40.749 "name": "Null2", 00:07:40.749 "nguid": "834CE9AE042B47C0A345D240006D350D", 00:07:40.749 "uuid": "834ce9ae-042b-47c0-a345-d240006d350d" 00:07:40.749 } 00:07:40.749 ] 00:07:40.749 }, 00:07:40.749 { 00:07:40.749 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:40.749 "subtype": "NVMe", 00:07:40.749 "listen_addresses": [ 00:07:40.749 { 00:07:40.749 "transport": "TCP", 00:07:40.749 "trtype": "TCP", 00:07:40.749 "adrfam": "IPv4", 00:07:40.749 "traddr": "10.0.0.2", 00:07:40.749 "trsvcid": "4420" 00:07:40.749 } 00:07:40.749 ], 00:07:40.749 "allow_any_host": true, 00:07:40.749 "hosts": [], 00:07:40.749 "serial_number": "SPDK00000000000003", 00:07:40.749 "model_number": "SPDK bdev Controller", 00:07:40.749 "max_namespaces": 32, 00:07:40.749 "min_cntlid": 1, 00:07:40.749 "max_cntlid": 65519, 00:07:40.749 "namespaces": [ 00:07:40.749 { 00:07:40.749 "nsid": 1, 00:07:40.749 "bdev_name": "Null3", 00:07:40.749 "name": "Null3", 00:07:40.749 "nguid": "05E120B1190B42C8B644D8935FF414BE", 00:07:40.749 "uuid": "05e120b1-190b-42c8-b644-d8935ff414be" 00:07:40.749 } 00:07:40.749 ] 
00:07:40.749 }, 00:07:40.749 { 00:07:40.749 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:40.749 "subtype": "NVMe", 00:07:40.749 "listen_addresses": [ 00:07:40.749 { 00:07:40.749 "transport": "TCP", 00:07:40.749 "trtype": "TCP", 00:07:40.749 "adrfam": "IPv4", 00:07:40.749 "traddr": "10.0.0.2", 00:07:40.749 "trsvcid": "4420" 00:07:40.749 } 00:07:40.749 ], 00:07:40.749 "allow_any_host": true, 00:07:40.749 "hosts": [], 00:07:40.749 "serial_number": "SPDK00000000000004", 00:07:40.749 "model_number": "SPDK bdev Controller", 00:07:40.749 "max_namespaces": 32, 00:07:40.749 "min_cntlid": 1, 00:07:40.749 "max_cntlid": 65519, 00:07:40.749 "namespaces": [ 00:07:40.749 { 00:07:40.749 "nsid": 1, 00:07:40.749 "bdev_name": "Null4", 00:07:40.749 "name": "Null4", 00:07:40.749 "nguid": "974CEF564E06448896DE0D8DEB9ED2A3", 00:07:40.749 "uuid": "974cef56-4e06-4488-96de-0d8deb9ed2a3" 00:07:40.749 } 00:07:40.749 ] 00:07:40.749 } 00:07:40.749 ] 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@42 -- # seq 1 4 00:07:40.749 03:58:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.749 03:58:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.749 03:58:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.749 03:58:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.749 03:58:55 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
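The JSON dump above is the raw output of the nvmf_get_subsystems RPC, and the loop running here tears the same four subsystems down again. Outside the rpc_cmd wrapper, the equivalent calls would look roughly like this (rpc.py path given relative to the SPDK tree; the jq filter mirrors the one the script applies to bdev_get_bdevs below):

    # list every configured subsystem NQN
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'

    # one iteration of the teardown loop: drop the subsystem, then its backing null bdev
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete Null1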
00:07:40.749 03:58:55 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:40.749 03:58:55 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:40.749 03:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:40.749 03:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:40.749 03:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:40.749 03:58:55 -- target/discovery.sh@49 -- # check_bdevs= 00:07:40.749 03:58:55 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:40.749 03:58:55 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:40.749 03:58:55 -- target/discovery.sh@57 -- # nvmftestfini 00:07:40.749 03:58:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:40.749 03:58:55 -- nvmf/common.sh@117 -- # sync 00:07:40.749 03:58:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.749 03:58:55 -- nvmf/common.sh@120 -- # set +e 00:07:40.749 03:58:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.749 03:58:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.749 rmmod nvme_tcp 00:07:40.749 rmmod nvme_fabrics 00:07:40.749 rmmod nvme_keyring 00:07:41.008 03:58:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.008 03:58:55 -- nvmf/common.sh@124 -- # set -e 00:07:41.008 03:58:55 -- nvmf/common.sh@125 -- # return 0 00:07:41.008 03:58:55 -- nvmf/common.sh@478 -- # '[' -n 3677681 ']' 00:07:41.008 03:58:55 -- nvmf/common.sh@479 -- # killprocess 3677681 00:07:41.008 03:58:55 -- common/autotest_common.sh@936 -- # '[' -z 3677681 ']' 00:07:41.008 03:58:55 -- common/autotest_common.sh@940 -- # kill -0 3677681 00:07:41.008 03:58:55 -- common/autotest_common.sh@941 -- # uname 00:07:41.008 03:58:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:41.008 03:58:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3677681 00:07:41.008 03:58:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:41.008 03:58:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:41.008 03:58:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3677681' 00:07:41.008 killing process with pid 3677681 00:07:41.008 03:58:55 -- common/autotest_common.sh@955 -- # kill 3677681 00:07:41.008 [2024-04-19 03:58:55.335641] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:41.008 03:58:55 -- common/autotest_common.sh@960 -- # wait 3677681 00:07:41.267 03:58:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:41.267 03:58:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:41.267 03:58:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:41.267 03:58:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.267 03:58:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.267 03:58:55 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.267 03:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.267 03:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.174 03:58:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.174 00:07:43.174 real 0m9.698s 00:07:43.174 user 0m8.112s 00:07:43.174 sys 0m4.643s 00:07:43.174 03:58:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.174 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.174 ************************************ 00:07:43.174 END TEST nvmf_discovery 00:07:43.174 ************************************ 00:07:43.174 03:58:57 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:43.174 03:58:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.174 03:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.174 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.433 ************************************ 00:07:43.433 START TEST nvmf_referrals 00:07:43.433 ************************************ 00:07:43.433 03:58:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:43.433 * Looking for test storage... 00:07:43.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.433 03:58:57 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.433 03:58:57 -- nvmf/common.sh@7 -- # uname -s 00:07:43.433 03:58:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.433 03:58:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.433 03:58:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.433 03:58:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.433 03:58:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.433 03:58:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.433 03:58:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.433 03:58:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.433 03:58:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.433 03:58:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.433 03:58:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:43.433 03:58:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:43.433 03:58:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.433 03:58:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.433 03:58:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.433 03:58:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.433 03:58:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.433 03:58:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.433 03:58:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.433 03:58:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.433 03:58:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.434 03:58:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.434 03:58:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.434 03:58:57 -- paths/export.sh@5 -- # export PATH 00:07:43.434 03:58:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.434 03:58:57 -- nvmf/common.sh@47 -- # : 0 00:07:43.434 03:58:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.434 03:58:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.434 03:58:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.434 03:58:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.434 03:58:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.434 03:58:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.434 03:58:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.434 03:58:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.434 03:58:57 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:43.434 03:58:57 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:43.434 03:58:57 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:43.434 03:58:57 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:43.434 03:58:57 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:43.434 03:58:57 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:43.434 03:58:57 -- target/referrals.sh@37 -- # nvmftestinit 00:07:43.434 03:58:57 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:43.434 03:58:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.434 03:58:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:43.434 03:58:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:43.434 03:58:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:43.434 03:58:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.434 03:58:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.434 03:58:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.434 03:58:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:43.434 03:58:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:43.434 03:58:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.434 03:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:50.001 03:59:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:50.001 03:59:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.001 03:59:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.001 03:59:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.001 03:59:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.001 03:59:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.001 03:59:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.001 03:59:03 -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.001 03:59:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.001 03:59:03 -- nvmf/common.sh@296 -- # e810=() 00:07:50.001 03:59:03 -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.001 03:59:03 -- nvmf/common.sh@297 -- # x722=() 00:07:50.001 03:59:03 -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.001 03:59:03 -- nvmf/common.sh@298 -- # mlx=() 00:07:50.001 03:59:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.001 03:59:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.001 03:59:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.001 03:59:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:50.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:50.001 03:59:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.001 03:59:03 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.001 03:59:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:50.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:50.001 03:59:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.001 03:59:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.001 03:59:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.001 03:59:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:50.001 Found net devices under 0000:af:00.0: cvl_0_0 00:07:50.001 03:59:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.001 03:59:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.001 03:59:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.001 03:59:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:50.001 Found net devices under 0000:af:00.1: cvl_0_1 00:07:50.001 03:59:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:50.001 03:59:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:50.001 03:59:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.001 03:59:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.001 03:59:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.001 03:59:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.001 03:59:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.001 03:59:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.001 03:59:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.001 03:59:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.001 03:59:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.001 03:59:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.001 03:59:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.001 03:59:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
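The nvmf_tcp_init steps continuing below finish moving the first E810 port into a private namespace, so the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1) talk over a real link rather than loopback. Condensed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability check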
00:07:50.001 03:59:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.001 03:59:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.001 03:59:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.001 03:59:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.001 03:59:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.001 03:59:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.001 03:59:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:07:50.001 00:07:50.001 --- 10.0.0.2 ping statistics --- 00:07:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.001 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:50.001 03:59:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:07:50.001 00:07:50.001 --- 10.0.0.1 ping statistics --- 00:07:50.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.001 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:50.001 03:59:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.001 03:59:03 -- nvmf/common.sh@411 -- # return 0 00:07:50.001 03:59:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:50.001 03:59:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.001 03:59:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:50.001 03:59:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.001 03:59:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:50.001 03:59:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:50.001 03:59:03 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:50.001 03:59:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:50.001 03:59:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:50.001 03:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:50.001 03:59:03 -- nvmf/common.sh@470 -- # nvmfpid=3681705 00:07:50.001 03:59:03 -- nvmf/common.sh@471 -- # waitforlisten 3681705 00:07:50.001 03:59:03 -- common/autotest_common.sh@817 -- # '[' -z 3681705 ']' 00:07:50.001 03:59:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.001 03:59:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:50.001 03:59:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.001 03:59:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:50.001 03:59:03 -- common/autotest_common.sh@10 -- # set +x 00:07:50.001 03:59:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.001 [2024-04-19 03:59:03.800374] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
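As in the discovery test, nvmfappstart launches the target inside the target namespace and then waits on its RPC socket; the launch recorded above amounts to the following (the polling loop is a minimal stand-in for the harness's waitforlisten helper, not its actual implementation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll /var/tmp/spdk.sock until the target answers RPCs
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done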
00:07:50.001 [2024-04-19 03:59:03.800430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.002 [2024-04-19 03:59:03.887416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.002 [2024-04-19 03:59:03.978083] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.002 [2024-04-19 03:59:03.978126] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.002 [2024-04-19 03:59:03.978136] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.002 [2024-04-19 03:59:03.978145] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.002 [2024-04-19 03:59:03.978152] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.002 [2024-04-19 03:59:03.978204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.002 [2024-04-19 03:59:03.978303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.002 [2024-04-19 03:59:03.978404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.002 [2024-04-19 03:59:03.978404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.260 03:59:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:50.260 03:59:04 -- common/autotest_common.sh@850 -- # return 0 00:07:50.260 03:59:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:50.260 03:59:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:50.260 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.260 03:59:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.260 03:59:04 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.260 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.260 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.260 [2024-04-19 03:59:04.772829] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.260 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.260 03:59:04 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:50.260 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.260 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 [2024-04-19 03:59:04.789033] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:50.518 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.518 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:50.518 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.518 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:50.518 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.518 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.518 03:59:04 -- target/referrals.sh@48 -- # jq length 00:07:50.518 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.518 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:50.518 03:59:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:50.518 03:59:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:50.518 03:59:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.518 03:59:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:50.518 03:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.518 03:59:04 -- target/referrals.sh@21 -- # sort 00:07:50.518 03:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:50.518 03:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:50.518 03:59:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:50.518 03:59:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:50.518 03:59:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.518 03:59:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.518 03:59:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.518 03:59:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.518 03:59:04 -- target/referrals.sh@26 -- # sort 00:07:50.776 03:59:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:50.776 03:59:05 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:50.776 03:59:05 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:50.776 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.776 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.776 03:59:05 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:50.776 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.776 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.776 03:59:05 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:50.776 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.776 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.776 03:59:05 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:50.776 03:59:05 -- target/referrals.sh@56 -- # jq length 00:07:50.776 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.777 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.777 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.777 03:59:05 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:50.777 03:59:05 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:50.777 03:59:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.777 03:59:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.777 03:59:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.777 03:59:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.777 03:59:05 -- target/referrals.sh@26 -- # sort 00:07:50.777 03:59:05 -- target/referrals.sh@26 -- # echo 00:07:50.777 03:59:05 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:50.777 03:59:05 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:50.777 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.777 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.777 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.777 03:59:05 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:50.777 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.777 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.777 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.777 03:59:05 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:50.777 03:59:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:50.777 03:59:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.777 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.777 03:59:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:50.777 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.777 03:59:05 -- target/referrals.sh@21 -- # sort 00:07:50.777 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.035 03:59:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:51.035 03:59:05 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:51.035 03:59:05 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:51.035 03:59:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.035 03:59:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.035 03:59:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.035 03:59:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.035 03:59:05 -- target/referrals.sh@26 -- # sort 00:07:51.035 03:59:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:51.035 03:59:05 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:51.035 03:59:05 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:51.035 03:59:05 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:51.035 03:59:05 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:51.035 03:59:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.035 03:59:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:51.294 03:59:05 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:51.294 03:59:05 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:51.294 03:59:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:51.294 03:59:05 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:51.294 03:59:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.294 03:59:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:51.294 03:59:05 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:51.294 03:59:05 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:51.294 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.294 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.294 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.294 03:59:05 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:51.294 03:59:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:51.294 03:59:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.294 03:59:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:51.294 03:59:05 -- target/referrals.sh@21 -- # sort 00:07:51.294 03:59:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.294 03:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:51.294 03:59:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.294 03:59:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:51.294 03:59:05 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:51.294 03:59:05 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:51.294 03:59:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.294 03:59:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.294 03:59:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.294 03:59:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.294 03:59:05 -- target/referrals.sh@26 -- # sort 00:07:51.552 03:59:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:51.552 03:59:05 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:51.552 03:59:05 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:51.552 03:59:05 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:51.552 03:59:05 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:51.552 03:59:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.552 03:59:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:51.811 03:59:06 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:51.811 03:59:06 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:51.811 03:59:06 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:51.811 03:59:06 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:51.811 03:59:06 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.811 03:59:06 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:51.811 03:59:06 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:51.811 03:59:06 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:51.811 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.811 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.811 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.811 03:59:06 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:51.811 03:59:06 -- target/referrals.sh@82 -- # jq length 00:07:51.811 03:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.811 03:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.811 03:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.811 03:59:06 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:51.811 03:59:06 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:51.811 03:59:06 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:51.811 03:59:06 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:51.811 03:59:06 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:51.811 03:59:06 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:51.811 03:59:06 -- target/referrals.sh@26 -- # sort 00:07:52.070 03:59:06 -- target/referrals.sh@26 -- # echo 00:07:52.070 03:59:06 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:52.070 03:59:06 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:52.070 03:59:06 -- target/referrals.sh@86 -- # nvmftestfini 00:07:52.070 03:59:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:52.070 03:59:06 -- nvmf/common.sh@117 -- # sync 00:07:52.070 03:59:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.070 03:59:06 -- nvmf/common.sh@120 -- # set +e 00:07:52.070 03:59:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.070 03:59:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.070 rmmod nvme_tcp 00:07:52.070 rmmod nvme_fabrics 00:07:52.070 rmmod nvme_keyring 00:07:52.070 03:59:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.070 03:59:06 -- nvmf/common.sh@124 -- # set -e 
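Taken together, referrals.sh exercises the discovery-referral RPCs end to end: add referrals to 127.0.0.2-4:4430, read them back over both the RPC socket and an actual nvme discover of the 8009 discovery listener, remove them, then repeat with explicit subsystem NQNs (-n discovery, -n nqn.2016-06.io.spdk:cnode1) and check the reported subtypes. A condensed sketch of one round trip, assuming rpc_cmd resolves to scripts/rpc.py and eliding the --hostnqn/--hostid flags shown in the trace:

    RPC=scripts/rpc.py   # assumption: rpc_cmd wraps this in the test helpers
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # 127.0.0.2
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    $RPC nvmf_discovery_get_referrals | jq length                    # back to 0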
00:07:52.070 03:59:06 -- nvmf/common.sh@125 -- # return 0 00:07:52.070 03:59:06 -- nvmf/common.sh@478 -- # '[' -n 3681705 ']' 00:07:52.070 03:59:06 -- nvmf/common.sh@479 -- # killprocess 3681705 00:07:52.070 03:59:06 -- common/autotest_common.sh@936 -- # '[' -z 3681705 ']' 00:07:52.070 03:59:06 -- common/autotest_common.sh@940 -- # kill -0 3681705 00:07:52.070 03:59:06 -- common/autotest_common.sh@941 -- # uname 00:07:52.070 03:59:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:52.070 03:59:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3681705 00:07:52.071 03:59:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:52.071 03:59:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:52.071 03:59:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3681705' 00:07:52.071 killing process with pid 3681705 00:07:52.071 03:59:06 -- common/autotest_common.sh@955 -- # kill 3681705 00:07:52.071 03:59:06 -- common/autotest_common.sh@960 -- # wait 3681705 00:07:52.330 03:59:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:52.330 03:59:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:52.330 03:59:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:52.330 03:59:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.330 03:59:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.330 03:59:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.330 03:59:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.330 03:59:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.867 03:59:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.867 00:07:54.867 real 0m10.965s 00:07:54.867 user 0m13.372s 00:07:54.867 sys 0m5.128s 00:07:54.867 03:59:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.867 03:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:54.867 ************************************ 00:07:54.867 END TEST nvmf_referrals 00:07:54.867 ************************************ 00:07:54.867 03:59:08 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:54.867 03:59:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.867 03:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.867 03:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:54.867 ************************************ 00:07:54.867 START TEST nvmf_connect_disconnect 00:07:54.867 ************************************ 00:07:54.867 03:59:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:54.867 * Looking for test storage... 
00:07:54.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.867 03:59:09 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.867 03:59:09 -- nvmf/common.sh@7 -- # uname -s 00:07:54.867 03:59:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.867 03:59:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.867 03:59:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.867 03:59:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.867 03:59:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.867 03:59:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.867 03:59:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.867 03:59:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.867 03:59:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.867 03:59:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.867 03:59:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:54.867 03:59:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:54.867 03:59:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.867 03:59:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.867 03:59:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.867 03:59:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.867 03:59:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.867 03:59:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.867 03:59:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.867 03:59:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.867 03:59:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.867 03:59:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.867 03:59:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.867 03:59:09 -- paths/export.sh@5 -- # export PATH 00:07:54.867 03:59:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.867 03:59:09 -- nvmf/common.sh@47 -- # : 0 00:07:54.867 03:59:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.867 03:59:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.868 03:59:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.868 03:59:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.868 03:59:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.868 03:59:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.868 03:59:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.868 03:59:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.868 03:59:09 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.868 03:59:09 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.868 03:59:09 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:54.868 03:59:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:54.868 03:59:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.868 03:59:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:54.868 03:59:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:54.868 03:59:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:54.868 03:59:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.868 03:59:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.868 03:59:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.868 03:59:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:54.868 03:59:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:54.868 03:59:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.868 03:59:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.147 03:59:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:00.147 03:59:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.147 03:59:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.147 03:59:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.147 03:59:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.147 03:59:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.147 03:59:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.147 03:59:14 -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.147 03:59:14 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:00.147 03:59:14 -- nvmf/common.sh@296 -- # e810=() 00:08:00.147 03:59:14 -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.147 03:59:14 -- nvmf/common.sh@297 -- # x722=() 00:08:00.147 03:59:14 -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.147 03:59:14 -- nvmf/common.sh@298 -- # mlx=() 00:08:00.147 03:59:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.147 03:59:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.147 03:59:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.147 03:59:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.147 03:59:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.147 03:59:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:00.147 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:00.147 03:59:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.147 03:59:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:00.147 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:00.147 03:59:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.147 03:59:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.147 03:59:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.147 03:59:14 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:08:00.147 Found net devices under 0000:af:00.0: cvl_0_0 00:08:00.147 03:59:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.147 03:59:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.147 03:59:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.147 03:59:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.147 03:59:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:00.147 Found net devices under 0000:af:00.1: cvl_0_1 00:08:00.147 03:59:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.147 03:59:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:00.147 03:59:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:00.147 03:59:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:00.147 03:59:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.147 03:59:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.147 03:59:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.147 03:59:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.147 03:59:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.147 03:59:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.147 03:59:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.147 03:59:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.147 03:59:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.147 03:59:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.148 03:59:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.148 03:59:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.148 03:59:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.148 03:59:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.148 03:59:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.148 03:59:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.148 03:59:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.148 03:59:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.148 03:59:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.148 03:59:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:08:00.148 00:08:00.148 --- 10.0.0.2 ping statistics --- 00:08:00.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.148 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:00.148 03:59:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:00.148 00:08:00.148 --- 10.0.0.1 ping statistics --- 00:08:00.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.148 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:00.148 03:59:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.148 03:59:14 -- nvmf/common.sh@411 -- # return 0 00:08:00.148 03:59:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:00.148 03:59:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.148 03:59:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:00.148 03:59:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:00.148 03:59:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.148 03:59:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:00.148 03:59:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:00.148 03:59:14 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:00.148 03:59:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:00.148 03:59:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:00.148 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.148 03:59:14 -- nvmf/common.sh@470 -- # nvmfpid=3685804 00:08:00.148 03:59:14 -- nvmf/common.sh@471 -- # waitforlisten 3685804 00:08:00.148 03:59:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.148 03:59:14 -- common/autotest_common.sh@817 -- # '[' -z 3685804 ']' 00:08:00.148 03:59:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.148 03:59:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:00.148 03:59:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.148 03:59:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:00.148 03:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:00.148 [2024-04-19 03:59:14.553727] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:00.148 [2024-04-19 03:59:14.553785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.148 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.148 [2024-04-19 03:59:14.639498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.445 [2024-04-19 03:59:14.727975] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.445 [2024-04-19 03:59:14.728019] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.445 [2024-04-19 03:59:14.728030] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.445 [2024-04-19 03:59:14.728039] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.445 [2024-04-19 03:59:14.728046] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
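As the NOTICE lines above spell out, the target was launched with -e 0xFFFF, so every tracepoint group is enabled and events accumulate in /dev/shm/nvmf_trace.0. Following those hints, a snapshot can be pulled while the app runs, or the shm file copied out for later decoding:

    spdk_trace -s nvmf -i 0            # snapshot of the running instance, per the NOTICE
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep for offline analysis/debug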
00:08:00.445 [2024-04-19 03:59:14.728094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.445 [2024-04-19 03:59:14.728195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.445 [2024-04-19 03:59:14.728290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.445 [2024-04-19 03:59:14.728291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.014 03:59:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:01.014 03:59:15 -- common/autotest_common.sh@850 -- # return 0 00:08:01.014 03:59:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:01.014 03:59:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:01.014 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.014 03:59:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.014 03:59:15 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:01.014 03:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.014 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.014 [2024-04-19 03:59:15.528200] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.014 03:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.014 03:59:15 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:01.014 03:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.014 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.274 03:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.274 03:59:15 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:01.274 03:59:15 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:01.274 03:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.274 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.274 03:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.274 03:59:15 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.274 03:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.274 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.274 03:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.274 03:59:15 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.274 03:59:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:01.274 03:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:01.275 [2024-04-19 03:59:15.584222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.275 03:59:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:01.275 03:59:15 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:01.275 03:59:15 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:01.275 03:59:15 -- target/connect_disconnect.sh@34 -- # set +x 00:08:05.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.763 03:59:32 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:18.763 03:59:32 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:18.763 03:59:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:18.763 03:59:32 -- nvmf/common.sh@117 -- # sync 00:08:18.763 03:59:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.763 03:59:32 -- nvmf/common.sh@120 -- # set +e 00:08:18.763 03:59:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.763 03:59:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.764 rmmod nvme_tcp 00:08:18.764 rmmod nvme_fabrics 00:08:18.764 rmmod nvme_keyring 00:08:18.764 03:59:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.764 03:59:32 -- nvmf/common.sh@124 -- # set -e 00:08:18.764 03:59:32 -- nvmf/common.sh@125 -- # return 0 00:08:18.764 03:59:32 -- nvmf/common.sh@478 -- # '[' -n 3685804 ']' 00:08:18.764 03:59:32 -- nvmf/common.sh@479 -- # killprocess 3685804 00:08:18.764 03:59:32 -- common/autotest_common.sh@936 -- # '[' -z 3685804 ']' 00:08:18.764 03:59:32 -- common/autotest_common.sh@940 -- # kill -0 3685804 00:08:18.764 03:59:32 -- common/autotest_common.sh@941 -- # uname 00:08:18.764 03:59:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.764 03:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3685804 00:08:18.764 03:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:18.764 03:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:18.764 03:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3685804' 00:08:18.764 killing process with pid 3685804 00:08:18.764 03:59:32 -- common/autotest_common.sh@955 -- # kill 3685804 00:08:18.764 03:59:32 -- common/autotest_common.sh@960 -- # wait 3685804 00:08:18.764 03:59:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:18.764 03:59:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:18.764 03:59:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:18.764 03:59:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.764 03:59:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.764 03:59:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.764 03:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.764 03:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.295 03:59:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.295 00:08:21.295 real 0m26.259s 00:08:21.295 user 1m14.598s 00:08:21.295 sys 0m5.337s 00:08:21.295 03:59:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.295 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.295 ************************************ 00:08:21.295 END TEST nvmf_connect_disconnect 00:08:21.295 ************************************ 00:08:21.295 03:59:35 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:21.295 03:59:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:21.295 03:59:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.295 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:21.295 ************************************ 00:08:21.295 START TEST nvmf_multitarget 00:08:21.295 ************************************ 00:08:21.295 03:59:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:08:21.295 * Looking for test storage... 00:08:21.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.295 03:59:35 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.295 03:59:35 -- nvmf/common.sh@7 -- # uname -s 00:08:21.295 03:59:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.295 03:59:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.295 03:59:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.295 03:59:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.295 03:59:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.295 03:59:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.295 03:59:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.295 03:59:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.295 03:59:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.295 03:59:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.295 03:59:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:21.295 03:59:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:21.295 03:59:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.295 03:59:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.295 03:59:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.295 03:59:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.295 03:59:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.295 03:59:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.295 03:59:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.295 03:59:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.295 03:59:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.295 03:59:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.295 03:59:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.295 03:59:35 -- paths/export.sh@5 -- # export PATH 00:08:21.295 03:59:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.295 03:59:35 -- nvmf/common.sh@47 -- # : 0 00:08:21.295 03:59:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.295 03:59:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.295 03:59:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.295 03:59:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.295 03:59:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.295 03:59:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.295 03:59:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.295 03:59:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.295 03:59:35 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:21.295 03:59:35 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:21.295 03:59:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:21.295 03:59:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.295 03:59:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:21.295 03:59:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:21.295 03:59:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:21.295 03:59:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.295 03:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.295 03:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.295 03:59:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:21.295 03:59:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:21.295 03:59:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.295 03:59:35 -- common/autotest_common.sh@10 -- # set +x 00:08:26.572 03:59:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.572 03:59:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.572 03:59:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.572 03:59:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.572 03:59:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.572 03:59:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.572 03:59:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.572 03:59:41 -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.572 03:59:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.572 03:59:41 -- 
nvmf/common.sh@296 -- # e810=() 00:08:26.572 03:59:41 -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.572 03:59:41 -- nvmf/common.sh@297 -- # x722=() 00:08:26.572 03:59:41 -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.572 03:59:41 -- nvmf/common.sh@298 -- # mlx=() 00:08:26.572 03:59:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.572 03:59:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.572 03:59:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.572 03:59:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.572 03:59:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.572 03:59:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:26.572 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:26.572 03:59:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.572 03:59:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:26.572 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:26.572 03:59:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.572 03:59:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.572 03:59:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.572 03:59:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:08:26.572 Found net devices under 0000:af:00.0: cvl_0_0 00:08:26.572 03:59:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.572 03:59:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.572 03:59:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.572 03:59:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.572 03:59:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:26.572 Found net devices under 0000:af:00.1: cvl_0_1 00:08:26.572 03:59:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.572 03:59:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:26.572 03:59:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:26.572 03:59:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:26.572 03:59:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.572 03:59:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.572 03:59:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.572 03:59:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.572 03:59:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.572 03:59:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.572 03:59:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.572 03:59:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.572 03:59:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.572 03:59:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.572 03:59:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.572 03:59:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.572 03:59:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.832 03:59:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.832 03:59:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.832 03:59:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.832 03:59:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.832 03:59:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.832 03:59:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.832 03:59:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:26.832 00:08:26.832 --- 10.0.0.2 ping statistics --- 00:08:26.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.832 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:26.832 03:59:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:26.832 00:08:26.832 --- 10.0.0.1 ping statistics --- 00:08:26.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.832 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:26.832 03:59:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.832 03:59:41 -- nvmf/common.sh@411 -- # return 0 00:08:26.832 03:59:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:26.832 03:59:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.832 03:59:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:26.832 03:59:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:26.832 03:59:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.832 03:59:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:26.832 03:59:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:27.091 03:59:41 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:27.091 03:59:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:27.091 03:59:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:27.091 03:59:41 -- common/autotest_common.sh@10 -- # set +x 00:08:27.091 03:59:41 -- nvmf/common.sh@470 -- # nvmfpid=3693055 00:08:27.091 03:59:41 -- nvmf/common.sh@471 -- # waitforlisten 3693055 00:08:27.091 03:59:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.091 03:59:41 -- common/autotest_common.sh@817 -- # '[' -z 3693055 ']' 00:08:27.091 03:59:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.091 03:59:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.091 03:59:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.091 03:59:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.091 03:59:41 -- common/autotest_common.sh@10 -- # set +x 00:08:27.091 [2024-04-19 03:59:41.445292] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:27.091 [2024-04-19 03:59:41.445355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.091 [2024-04-19 03:59:41.530785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.350 [2024-04-19 03:59:41.620274] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.350 [2024-04-19 03:59:41.620315] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.350 [2024-04-19 03:59:41.620326] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.350 [2024-04-19 03:59:41.620334] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.350 [2024-04-19 03:59:41.620341] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
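The "EAL: No free 2048 kB hugepages reported on node 1" line recurs before every app start in this log; it typically means the 2 MB hugepage pool on NUMA node 1 is empty while allocations are served from node 0, not that startup failed. The per-node counters can be checked directly on the test node:

    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages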
00:08:27.350 [2024-04-19 03:59:41.620401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.350 [2024-04-19 03:59:41.620501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.350 [2024-04-19 03:59:41.620616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.350 [2024-04-19 03:59:41.620616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.918 03:59:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:27.918 03:59:42 -- common/autotest_common.sh@850 -- # return 0 00:08:27.918 03:59:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:27.918 03:59:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:27.918 03:59:42 -- common/autotest_common.sh@10 -- # set +x 00:08:27.918 03:59:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.918 03:59:42 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:27.918 03:59:42 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:27.918 03:59:42 -- target/multitarget.sh@21 -- # jq length 00:08:28.175 03:59:42 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:28.175 03:59:42 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:28.175 "nvmf_tgt_1" 00:08:28.175 03:59:42 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:28.434 "nvmf_tgt_2" 00:08:28.434 03:59:42 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:28.434 03:59:42 -- target/multitarget.sh@28 -- # jq length 00:08:28.434 03:59:42 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:28.434 03:59:42 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:28.693 true 00:08:28.693 03:59:43 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:28.693 true 00:08:28.693 03:59:43 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:28.693 03:59:43 -- target/multitarget.sh@35 -- # jq length 00:08:28.952 03:59:43 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:28.952 03:59:43 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:28.952 03:59:43 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:28.952 03:59:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:28.952 03:59:43 -- nvmf/common.sh@117 -- # sync 00:08:28.952 03:59:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.952 03:59:43 -- nvmf/common.sh@120 -- # set +e 00:08:28.952 03:59:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.952 03:59:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.952 rmmod nvme_tcp 00:08:28.952 rmmod nvme_fabrics 00:08:28.952 rmmod nvme_keyring 00:08:28.952 03:59:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.952 03:59:43 -- nvmf/common.sh@124 -- # set -e 00:08:28.952 03:59:43 -- nvmf/common.sh@125 -- # return 0 
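The multitarget checks above are terse in trace form; spelled out, the test asserts the target count before and after creating and deleting two extra targets. Everything here (script path, target names, the -s 32 size) is taken from this run:

set -e   # so each bracket test acts as an assertion
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new ones
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default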
00:08:28.952 03:59:43 -- nvmf/common.sh@478 -- # '[' -n 3693055 ']' 00:08:28.952 03:59:43 -- nvmf/common.sh@479 -- # killprocess 3693055 00:08:28.952 03:59:43 -- common/autotest_common.sh@936 -- # '[' -z 3693055 ']' 00:08:28.952 03:59:43 -- common/autotest_common.sh@940 -- # kill -0 3693055 00:08:28.952 03:59:43 -- common/autotest_common.sh@941 -- # uname 00:08:28.952 03:59:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:28.952 03:59:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3693055 00:08:28.952 03:59:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:28.952 03:59:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:28.952 03:59:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3693055' 00:08:28.952 killing process with pid 3693055 00:08:28.952 03:59:43 -- common/autotest_common.sh@955 -- # kill 3693055 00:08:28.952 03:59:43 -- common/autotest_common.sh@960 -- # wait 3693055 00:08:29.212 03:59:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:29.212 03:59:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:29.212 03:59:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:29.212 03:59:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.212 03:59:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.212 03:59:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.212 03:59:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.212 03:59:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.790 03:59:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.790 00:08:31.790 real 0m10.363s 00:08:31.790 user 0m10.787s 00:08:31.790 sys 0m4.939s 00:08:31.790 03:59:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:31.790 03:59:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.790 ************************************ 00:08:31.790 END TEST nvmf_multitarget 00:08:31.790 ************************************ 00:08:31.790 03:59:45 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:31.790 03:59:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:31.790 03:59:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.790 03:59:45 -- common/autotest_common.sh@10 -- # set +x 00:08:31.790 ************************************ 00:08:31.790 START TEST nvmf_rpc 00:08:31.790 ************************************ 00:08:31.790 03:59:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:31.790 * Looking for test storage... 
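The nvmftestfini path that closed the multitarget test out, condensed. The kill, modprobe and address-flush steps are straight from the trace; _remove_spdk_ns is not expanded in the log, so deleting the namespace is an assumption about its effect:

kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null   # stop the target app (pid 3693055 in this run)

modprobe -v -r nvme-tcp        # unload host-side modules, as logged above
modprobe -v -r nvme-fabrics

ip -4 addr flush cvl_0_1             # drop the initiator-side address
ip netns delete cvl_0_0_ns_spdk      # assumed effect: cvl_0_0 returns to the root netns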
00:08:31.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.790 03:59:46 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.790 03:59:46 -- nvmf/common.sh@7 -- # uname -s 00:08:31.790 03:59:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.790 03:59:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.790 03:59:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.790 03:59:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.790 03:59:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.790 03:59:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.790 03:59:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.790 03:59:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.790 03:59:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.790 03:59:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.790 03:59:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:31.790 03:59:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:31.790 03:59:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.790 03:59:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.790 03:59:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.790 03:59:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.790 03:59:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.790 03:59:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.790 03:59:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.790 03:59:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.790 03:59:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.790 03:59:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.790 03:59:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.790 03:59:46 -- paths/export.sh@5 -- # export PATH 00:08:31.790 03:59:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.790 03:59:46 -- nvmf/common.sh@47 -- # : 0 00:08:31.790 03:59:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.790 03:59:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.790 03:59:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.790 03:59:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.790 03:59:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.790 03:59:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.790 03:59:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.790 03:59:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.790 03:59:46 -- target/rpc.sh@11 -- # loops=5 00:08:31.790 03:59:46 -- target/rpc.sh@23 -- # nvmftestinit 00:08:31.790 03:59:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:31.790 03:59:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.790 03:59:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:31.790 03:59:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:31.790 03:59:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:31.790 03:59:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.790 03:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.790 03:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.790 03:59:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:31.790 03:59:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:31.790 03:59:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.790 03:59:46 -- common/autotest_common.sh@10 -- # set +x 00:08:37.083 03:59:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:37.083 03:59:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.083 03:59:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.083 03:59:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.083 03:59:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.083 03:59:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.083 03:59:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.083 03:59:50 -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.083 03:59:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.083 03:59:50 -- nvmf/common.sh@296 -- # e810=() 00:08:37.083 03:59:50 -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.083 
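The gather_supported_nvmf_pci_devs walk that follows below matches NICs by PCI vendor:device ID and then resolves each one to its net device through sysfs. A standalone approximation for the E810 case (0x8086:0x159b); note that the real helper builds a pci_bus_cache for several device IDs up front, and the lspci invocation here is this sketch's own construction:

pci_devs=($(lspci -D -n -mm -d 8086:159b | awk '{print $1}'))   # e.g. 0000:af:00.0

net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # .../net/cvl_0_0
    net_devs+=("${pci_net_devs[@]##*/}")               # keep only the interface name
done
echo "Found net devices: ${net_devs[*]}"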
03:59:50 -- nvmf/common.sh@297 -- # x722=() 00:08:37.083 03:59:50 -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.083 03:59:50 -- nvmf/common.sh@298 -- # mlx=() 00:08:37.083 03:59:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.084 03:59:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.084 03:59:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.084 03:59:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.084 03:59:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.084 03:59:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:37.084 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:37.084 03:59:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.084 03:59:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:37.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:37.084 03:59:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.084 03:59:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.084 03:59:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.084 03:59:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:37.084 Found net devices under 0000:af:00.0: cvl_0_0 00:08:37.084 03:59:50 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:37.084 03:59:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.084 03:59:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.084 03:59:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.084 03:59:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:37.084 Found net devices under 0000:af:00.1: cvl_0_1 00:08:37.084 03:59:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.084 03:59:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:37.084 03:59:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:37.084 03:59:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:37.084 03:59:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.084 03:59:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.084 03:59:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.084 03:59:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.084 03:59:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.084 03:59:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.084 03:59:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.084 03:59:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.084 03:59:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.084 03:59:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.084 03:59:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.084 03:59:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.084 03:59:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.084 03:59:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.084 03:59:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.084 03:59:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.084 03:59:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.084 03:59:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.084 03:59:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.084 03:59:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:08:37.084 00:08:37.084 --- 10.0.0.2 ping statistics --- 00:08:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.084 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:08:37.084 03:59:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:37.084 00:08:37.084 --- 10.0.0.1 ping statistics --- 00:08:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.084 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:37.084 03:59:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.084 03:59:51 -- nvmf/common.sh@411 -- # return 0 00:08:37.084 03:59:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:37.084 03:59:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.084 03:59:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:37.084 03:59:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:37.084 03:59:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.084 03:59:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:37.084 03:59:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:37.084 03:59:51 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:37.084 03:59:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:37.084 03:59:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:37.084 03:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:37.084 03:59:51 -- nvmf/common.sh@470 -- # nvmfpid=3696869 00:08:37.084 03:59:51 -- nvmf/common.sh@471 -- # waitforlisten 3696869 00:08:37.084 03:59:51 -- common/autotest_common.sh@817 -- # '[' -z 3696869 ']' 00:08:37.084 03:59:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.084 03:59:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:37.084 03:59:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:37.084 03:59:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.084 03:59:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:37.084 03:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:37.084 [2024-04-19 03:59:51.251414] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:08:37.084 [2024-04-19 03:59:51.251471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.084 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.084 [2024-04-19 03:59:51.336368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.084 [2024-04-19 03:59:51.427223] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.084 [2024-04-19 03:59:51.427267] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.084 [2024-04-19 03:59:51.427277] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.084 [2024-04-19 03:59:51.427286] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.084 [2024-04-19 03:59:51.427293] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
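The nvmf_tcp_init steps just traced, condensed into plain commands: one E810 port becomes the target side inside a private namespace, its sibling stays in the root namespace as the initiator, port 4420 is opened, and both directions are ping-tested. All interface names and addresses are from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns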
00:08:37.084 [2024-04-19 03:59:51.427350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.084 [2024-04-19 03:59:51.427448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.084 [2024-04-19 03:59:51.427561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.084 [2024-04-19 03:59:51.427563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.021 03:59:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:38.021 03:59:52 -- common/autotest_common.sh@850 -- # return 0 00:08:38.021 03:59:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:38.021 03:59:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 03:59:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.021 03:59:52 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@26 -- # stats='{ 00:08:38.021 "tick_rate": 2200000000, 00:08:38.021 "poll_groups": [ 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_0", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_1", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_2", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_3", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [] 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 }' 00:08:38.021 03:59:52 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:38.021 03:59:52 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:38.021 03:59:52 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:38.021 03:59:52 -- target/rpc.sh@15 -- # wc -l 00:08:38.021 03:59:52 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:38.021 03:59:52 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:38.021 03:59:52 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:38.021 03:59:52 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 [2024-04-19 03:59:52.339575] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@33 -- # stats='{ 00:08:38.021 "tick_rate": 2200000000, 00:08:38.021 "poll_groups": [ 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_0", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [ 00:08:38.021 { 00:08:38.021 "trtype": "TCP" 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_1", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [ 00:08:38.021 { 00:08:38.021 "trtype": "TCP" 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_2", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [ 00:08:38.021 { 00:08:38.021 "trtype": "TCP" 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 }, 00:08:38.021 { 00:08:38.021 "name": "nvmf_tgt_poll_group_3", 00:08:38.021 "admin_qpairs": 0, 00:08:38.021 "io_qpairs": 0, 00:08:38.021 "current_admin_qpairs": 0, 00:08:38.021 "current_io_qpairs": 0, 00:08:38.021 "pending_bdev_io": 0, 00:08:38.021 "completed_nvme_io": 0, 00:08:38.021 "transports": [ 00:08:38.021 { 00:08:38.021 "trtype": "TCP" 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 } 00:08:38.021 ] 00:08:38.021 }' 00:08:38.021 03:59:52 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:38.021 03:59:52 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:38.021 03:59:52 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:38.021 03:59:52 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:38.021 03:59:52 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:38.021 03:59:52 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:38.021 03:59:52 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:38.021 03:59:52 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:38.021 03:59:52 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 Malloc1 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 
03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.021 03:59:52 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.021 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.021 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.021 [2024-04-19 03:59:52.520007] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.021 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.022 03:59:52 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:38.022 03:59:52 -- common/autotest_common.sh@638 -- # local es=0 00:08:38.022 03:59:52 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:38.022 03:59:52 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:38.022 03:59:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.022 03:59:52 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:38.022 03:59:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.022 03:59:52 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:38.022 03:59:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.022 03:59:52 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:38.022 03:59:52 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:38.022 03:59:52 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:38.280 [2024-04-19 03:59:52.548689] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:38.280 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:38.280 could not add new controller: failed to write to nvme-fabrics device 00:08:38.280 03:59:52 -- common/autotest_common.sh@641 -- # es=1 00:08:38.280 03:59:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:38.280 03:59:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:38.280 03:59:52 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:38.280 03:59:52 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:38.280 03:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.280 03:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:38.280 03:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.280 03:59:52 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.652 03:59:53 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.652 03:59:53 -- common/autotest_common.sh@1184 -- # local i=0 00:08:39.652 03:59:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.652 03:59:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:39.652 03:59:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:41.554 03:59:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:41.554 03:59:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:41.554 03:59:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.554 03:59:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:41.554 03:59:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.554 03:59:55 -- common/autotest_common.sh@1194 -- # return 0 00:08:41.554 03:59:55 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.554 03:59:55 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.554 03:59:55 -- common/autotest_common.sh@1205 -- # local i=0 00:08:41.554 03:59:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:41.554 03:59:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.554 03:59:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:41.554 03:59:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.554 03:59:55 -- common/autotest_common.sh@1217 -- # return 0 00:08:41.554 03:59:55 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:41.554 03:59:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.554 03:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:41.554 03:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.554 03:59:56 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.554 03:59:56 -- common/autotest_common.sh@638 -- # local es=0 00:08:41.554 03:59:56 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.554 03:59:56 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:41.554 03:59:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.554 03:59:56 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:41.554 03:59:56 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.554 03:59:56 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:41.554 03:59:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.554 03:59:56 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:41.554 03:59:56 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:41.554 03:59:56 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.554 [2024-04-19 03:59:56.035890] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:08:41.554 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:41.554 could not add new controller: failed to write to nvme-fabrics device 00:08:41.554 03:59:56 -- common/autotest_common.sh@641 -- # es=1 00:08:41.554 03:59:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:41.554 03:59:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:41.554 03:59:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:41.554 03:59:56 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:41.554 03:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.554 03:59:56 -- common/autotest_common.sh@10 -- # set +x 00:08:41.554 03:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.554 03:59:56 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.930 03:59:57 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.930 03:59:57 -- common/autotest_common.sh@1184 -- # local i=0 00:08:42.930 03:59:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.930 03:59:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:42.930 03:59:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:44.835 03:59:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:44.835 03:59:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:44.835 03:59:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.835 03:59:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:44.835 03:59:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.835 03:59:59 -- common/autotest_common.sh@1194 -- # return 0 00:08:44.835 03:59:59 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:45.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.094 03:59:59 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:45.094 03:59:59 -- common/autotest_common.sh@1205 -- # local i=0 00:08:45.094 03:59:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:45.094 03:59:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.094 03:59:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:45.094 03:59:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:45.094 03:59:59 -- common/autotest_common.sh@1217 -- # return 0 00:08:45.094 03:59:59 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.094 03:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.094 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 03:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.094 03:59:59 -- target/rpc.sh@81 -- # seq 1 5 00:08:45.094 03:59:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:45.094 03:59:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:45.094 03:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.094 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 03:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.094 03:59:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.094 03:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.094 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 [2024-04-19 03:59:59.509683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.094 03:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.094 03:59:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:45.094 03:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.094 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 03:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.094 03:59:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:45.094 03:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:45.094 03:59:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.094 03:59:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:45.094 03:59:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:46.470 04:00:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.470 04:00:00 -- common/autotest_common.sh@1184 -- # local i=0 00:08:46.470 04:00:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.470 04:00:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:46.470 04:00:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:48.365 04:00:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:48.365 04:00:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:48.365 04:00:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:48.365 04:00:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:48.365 04:00:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:48.365 04:00:02 -- common/autotest_common.sh@1194 -- # return 0 00:08:48.365 04:00:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.623 04:00:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.623 04:00:02 -- common/autotest_common.sh@1205 -- # local i=0 00:08:48.623 04:00:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:48.623 04:00:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
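Each of the five rpc.sh loop iterations logged above and below follows the same subsystem lifecycle. One iteration, spelled out; RPC is assumed to be SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket this target listens on, while the NQN, serial, bdev, namespace ID and address all come from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # bdev Malloc1 as nsid 5
$RPC nvmf_subsystem_allow_any_host "$NQN"           # without this, connect fails
                                                    # with "does not allow host"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420   # initiator attaches
# ... wait for a block device with serial SPDKISFASTANDAWESOME, then:
nvme disconnect -n "$NQN"

$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"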
00:08:48.623 04:00:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:48.623 04:00:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.623 04:00:02 -- common/autotest_common.sh@1217 -- # return 0 00:08:48.623 04:00:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:48.623 04:00:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 [2024-04-19 04:00:02.966421] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:48.623 04:00:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:48.623 04:00:02 -- common/autotest_common.sh@10 -- # set +x 00:08:48.623 04:00:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:48.623 04:00:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.020 04:00:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.020 04:00:04 -- common/autotest_common.sh@1184 -- # local i=0 00:08:50.020 04:00:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.020 04:00:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:50.020 04:00:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:51.920 04:00:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:51.920 04:00:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:51.920 04:00:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.920 04:00:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:51.920 04:00:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.920 04:00:06 -- 
common/autotest_common.sh@1194 -- # return 0 00:08:51.920 04:00:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.920 04:00:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.920 04:00:06 -- common/autotest_common.sh@1205 -- # local i=0 00:08:51.920 04:00:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:51.920 04:00:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.920 04:00:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:51.920 04:00:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.920 04:00:06 -- common/autotest_common.sh@1217 -- # return 0 00:08:51.920 04:00:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.920 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.920 04:00:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.920 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.920 04:00:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:51.920 04:00:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.920 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.920 04:00:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.920 [2024-04-19 04:00:06.428664] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.920 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.920 04:00:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.920 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.920 04:00:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:51.920 04:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.920 04:00:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.177 04:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.177 04:00:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.551 04:00:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:53.551 04:00:07 -- common/autotest_common.sh@1184 -- # local i=0 00:08:53.551 04:00:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:53.551 04:00:07 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:08:53.551 04:00:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:55.454 04:00:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:55.454 04:00:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:55.454 04:00:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.454 04:00:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:55.454 04:00:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.454 04:00:09 -- common/autotest_common.sh@1194 -- # return 0 00:08:55.454 04:00:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.454 04:00:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.454 04:00:09 -- common/autotest_common.sh@1205 -- # local i=0 00:08:55.454 04:00:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:55.454 04:00:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.454 04:00:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:55.454 04:00:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.454 04:00:09 -- common/autotest_common.sh@1217 -- # return 0 00:08:55.454 04:00:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 04:00:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 04:00:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:55.454 04:00:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 04:00:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 [2024-04-19 04:00:09.949956] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 04:00:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 04:00:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:55.454 04:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.454 04:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.454 04:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.454 
04:00:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.830 04:00:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.830 04:00:11 -- common/autotest_common.sh@1184 -- # local i=0 00:08:56.830 04:00:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.830 04:00:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:56.830 04:00:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:58.728 04:00:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:58.728 04:00:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:58.728 04:00:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.986 04:00:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:58.986 04:00:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.986 04:00:13 -- common/autotest_common.sh@1194 -- # return 0 00:08:58.986 04:00:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.986 04:00:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.986 04:00:13 -- common/autotest_common.sh@1205 -- # local i=0 00:08:58.986 04:00:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:58.986 04:00:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.986 04:00:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:58.986 04:00:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.986 04:00:13 -- common/autotest_common.sh@1217 -- # return 0 00:08:58.986 04:00:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.986 04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.986 04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:58.986 04:00:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:58.986 04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.986 04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 [2024-04-19 04:00:13.384897] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:58.986 
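The nvmf_get_stats assertions woven through this test lean on two small helpers whose jq filters are visible in the trace; reconstructed here as a sketch, against the same assumed rpc.py path as in the loop above:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

jcount() { $RPC nvmf_get_stats | jq "$1" | wc -l; }                        # count matches
jsum()   { $RPC nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'; }  # total a field

[ "$(jcount '.poll_groups[].name')" -eq 4 ]           # one poll group per core
[ "$(jsum  '.poll_groups[].admin_qpairs')" -eq 0 ]    # no admin qpairs connected yet
[ "$(jsum  '.poll_groups[].io_qpairs')" -eq 0 ]       # no I/O qpairs connected yet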
04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:58.986 04:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.986 04:00:13 -- common/autotest_common.sh@10 -- # set +x 00:08:58.986 04:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.986 04:00:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.395 04:00:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.395 04:00:14 -- common/autotest_common.sh@1184 -- # local i=0 00:09:00.395 04:00:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.395 04:00:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:00.395 04:00:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:02.297 04:00:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:02.297 04:00:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:02.297 04:00:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.297 04:00:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:02.297 04:00:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.297 04:00:16 -- common/autotest_common.sh@1194 -- # return 0 00:09:02.297 04:00:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.557 04:00:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@1205 -- # local i=0 00:09:02.557 04:00:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:02.557 04:00:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:02.557 04:00:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@1217 -- # return 0 00:09:02.557 04:00:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@99 -- # seq 1 5 00:09:02.557 04:00:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:02.557 04:00:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 [2024-04-19 04:00:16.905779] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:02.557 04:00:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 [2024-04-19 04:00:16.953927] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:02.557 04:00:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.557 04:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 [2024-04-19 04:00:17.002095] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:02.557 04:00:17 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 [2024-04-19 04:00:17.054278] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 
04:00:17 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.557 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.557 04:00:17 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.557 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.557 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:02.816 04:00:17 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 [2024-04-19 04:00:17.102451] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
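The five iterations above all execute the same create/tear-down cycle; condensed, the loop the trace is walking looks like this (a sketch assembled from the rpc_cmd calls in the xtrace; rpc_cmd is the autotest wrapper around scripts/rpc.py):

for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

Each `[[ 0 == 0 ]]` line in the trace is the wrapper asserting the RPC's exit status before the script moves on.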
00:09:02.816 04:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.816 04:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:02.816 04:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.816 04:00:17 -- target/rpc.sh@110 -- # stats='{ 00:09:02.816 "tick_rate": 2200000000, 00:09:02.816 "poll_groups": [ 00:09:02.816 { 00:09:02.816 "name": "nvmf_tgt_poll_group_0", 00:09:02.816 "admin_qpairs": 2, 00:09:02.816 "io_qpairs": 196, 00:09:02.816 "current_admin_qpairs": 0, 00:09:02.816 "current_io_qpairs": 0, 00:09:02.816 "pending_bdev_io": 0, 00:09:02.816 "completed_nvme_io": 345, 00:09:02.816 "transports": [ 00:09:02.816 { 00:09:02.816 "trtype": "TCP" 00:09:02.816 } 00:09:02.816 ] 00:09:02.816 }, 00:09:02.816 { 00:09:02.816 "name": "nvmf_tgt_poll_group_1", 00:09:02.816 "admin_qpairs": 2, 00:09:02.816 "io_qpairs": 196, 00:09:02.816 "current_admin_qpairs": 0, 00:09:02.816 "current_io_qpairs": 0, 00:09:02.816 "pending_bdev_io": 0, 00:09:02.816 "completed_nvme_io": 247, 00:09:02.816 "transports": [ 00:09:02.816 { 00:09:02.816 "trtype": "TCP" 00:09:02.816 } 00:09:02.816 ] 00:09:02.816 }, 00:09:02.816 { 00:09:02.816 "name": "nvmf_tgt_poll_group_2", 00:09:02.816 "admin_qpairs": 1, 00:09:02.816 "io_qpairs": 196, 00:09:02.816 "current_admin_qpairs": 0, 00:09:02.816 "current_io_qpairs": 0, 00:09:02.816 "pending_bdev_io": 0, 00:09:02.816 "completed_nvme_io": 296, 00:09:02.816 "transports": [ 00:09:02.816 { 00:09:02.816 "trtype": "TCP" 00:09:02.816 } 00:09:02.816 ] 00:09:02.816 }, 00:09:02.816 { 00:09:02.816 "name": "nvmf_tgt_poll_group_3", 00:09:02.816 "admin_qpairs": 2, 00:09:02.816 "io_qpairs": 196, 00:09:02.816 "current_admin_qpairs": 0, 00:09:02.816 "current_io_qpairs": 0, 00:09:02.816 "pending_bdev_io": 0, 00:09:02.816 "completed_nvme_io": 246, 00:09:02.816 "transports": [ 00:09:02.816 { 00:09:02.816 "trtype": "TCP" 00:09:02.816 } 00:09:02.816 ] 00:09:02.816 } 00:09:02.816 ] 00:09:02.816 }' 00:09:02.816 04:00:17 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:02.816 04:00:17 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:02.816 04:00:17 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:02.816 04:00:17 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:02.816 04:00:17 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:02.816 04:00:17 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:02.816 04:00:17 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:02.816 04:00:17 -- target/rpc.sh@123 -- # nvmftestfini 00:09:02.816 04:00:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:02.816 04:00:17 -- nvmf/common.sh@117 -- # sync 00:09:02.816 04:00:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.816 04:00:17 -- nvmf/common.sh@120 -- # set +e 00:09:02.816 04:00:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.816 04:00:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.816 rmmod nvme_tcp 00:09:02.816 rmmod nvme_fabrics 00:09:02.816 rmmod nvme_keyring 00:09:02.816 04:00:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.816 04:00:17 -- nvmf/common.sh@124 -- # set -e 00:09:02.816 04:00:17 -- 
nvmf/common.sh@125 -- # return 0 00:09:02.816 04:00:17 -- nvmf/common.sh@478 -- # '[' -n 3696869 ']' 00:09:02.816 04:00:17 -- nvmf/common.sh@479 -- # killprocess 3696869 00:09:02.816 04:00:17 -- common/autotest_common.sh@936 -- # '[' -z 3696869 ']' 00:09:02.816 04:00:17 -- common/autotest_common.sh@940 -- # kill -0 3696869 00:09:02.816 04:00:17 -- common/autotest_common.sh@941 -- # uname 00:09:02.816 04:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.816 04:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3696869 00:09:03.075 04:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.075 04:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.075 04:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3696869' 00:09:03.075 killing process with pid 3696869 00:09:03.075 04:00:17 -- common/autotest_common.sh@955 -- # kill 3696869 00:09:03.075 04:00:17 -- common/autotest_common.sh@960 -- # wait 3696869 00:09:03.335 04:00:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:03.335 04:00:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:03.335 04:00:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:03.335 04:00:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.335 04:00:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.335 04:00:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.335 04:00:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.335 04:00:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.241 04:00:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.241 00:09:05.241 real 0m33.757s 00:09:05.241 user 1m45.645s 00:09:05.241 sys 0m5.740s 00:09:05.241 04:00:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.241 04:00:19 -- common/autotest_common.sh@10 -- # set +x 00:09:05.241 ************************************ 00:09:05.241 END TEST nvmf_rpc 00:09:05.241 ************************************ 00:09:05.241 04:00:19 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:05.241 04:00:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:05.241 04:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.241 04:00:19 -- common/autotest_common.sh@10 -- # set +x 00:09:05.501 ************************************ 00:09:05.501 START TEST nvmf_invalid 00:09:05.501 ************************************ 00:09:05.501 04:00:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:05.501 * Looking for test storage... 
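The qpair accounting at the end of the nvmf_rpc run above is done by jsum, which is just a jq projection summed by awk. A sketch of the helper as the trace suggests it works (the herestring over $stats is an assumption; the filter and awk body are verbatim from the xtrace):

jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

stats=$(rpc_cmd nvmf_get_stats)
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run above: 2+2+1+2
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784 in the run above: 4 x 196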
00:09:05.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.501 04:00:19 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.501 04:00:19 -- nvmf/common.sh@7 -- # uname -s 00:09:05.501 04:00:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.501 04:00:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.501 04:00:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.501 04:00:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.501 04:00:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.501 04:00:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.501 04:00:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.501 04:00:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.501 04:00:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.501 04:00:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.501 04:00:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:05.501 04:00:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:05.501 04:00:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.501 04:00:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.501 04:00:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.501 04:00:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.501 04:00:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.501 04:00:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.501 04:00:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.501 04:00:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.501 04:00:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.501 04:00:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.501 04:00:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.501 04:00:19 -- paths/export.sh@5 -- # export PATH 00:09:05.501 04:00:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.501 04:00:19 -- nvmf/common.sh@47 -- # : 0 00:09:05.501 04:00:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.501 04:00:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.501 04:00:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.501 04:00:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.501 04:00:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.501 04:00:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.501 04:00:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.501 04:00:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.501 04:00:19 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:05.501 04:00:19 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.501 04:00:19 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:05.501 04:00:19 -- target/invalid.sh@14 -- # target=foobar 00:09:05.501 04:00:19 -- target/invalid.sh@16 -- # RANDOM=0 00:09:05.501 04:00:19 -- target/invalid.sh@34 -- # nvmftestinit 00:09:05.501 04:00:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:05.501 04:00:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.501 04:00:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:05.501 04:00:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:05.501 04:00:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:05.501 04:00:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.501 04:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.501 04:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.501 04:00:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:05.501 04:00:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:05.501 04:00:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.501 04:00:19 -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 04:00:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:12.085 04:00:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.085 04:00:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.085 04:00:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.085 04:00:25 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.085 04:00:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.085 04:00:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.085 04:00:25 -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.085 04:00:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.085 04:00:25 -- nvmf/common.sh@296 -- # e810=() 00:09:12.085 04:00:25 -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.085 04:00:25 -- nvmf/common.sh@297 -- # x722=() 00:09:12.085 04:00:25 -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.085 04:00:25 -- nvmf/common.sh@298 -- # mlx=() 00:09:12.085 04:00:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.085 04:00:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.085 04:00:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.085 04:00:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:12.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:12.085 04:00:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.085 04:00:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:12.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:12.085 04:00:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.085 
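gather_supported_nvmf_pci_devs, traced above, buckets PCI functions by vendor:device ID (0x8086:0x159b lands in the e810 array here) and then resolves each matched function to its kernel netdev through sysfs. The lookup per device, distilled from the trace (the address is the first matched port on this rig):

pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"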
04:00:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.085 04:00:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.085 04:00:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:12.085 Found net devices under 0000:af:00.0: cvl_0_0 00:09:12.085 04:00:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.085 04:00:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.085 04:00:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.085 04:00:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:12.085 Found net devices under 0000:af:00.1: cvl_0_1 00:09:12.085 04:00:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:12.085 04:00:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:12.085 04:00:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.085 04:00:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.085 04:00:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.085 04:00:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.085 04:00:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.085 04:00:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.085 04:00:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.085 04:00:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.085 04:00:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.085 04:00:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.085 04:00:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.085 04:00:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.085 04:00:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.085 04:00:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.085 04:00:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.085 04:00:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.085 04:00:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.085 04:00:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.085 04:00:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:09:12.085 00:09:12.085 --- 10.0.0.2 ping statistics --- 00:09:12.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.085 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:12.085 04:00:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
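nvmf_tcp_init, whose steps are traced above, puts one E810 port (cvl_0_0, 10.0.0.2) into a private network namespace as the target side and leaves its peer port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator, so NVMe/TCP traffic really crosses the physical link. The plumbing, condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two pings are the sanity gate; only after both succeed does the harness report is_hw=yes and continue.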
00:09:12.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:09:12.085 00:09:12.085 --- 10.0.0.1 ping statistics --- 00:09:12.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.085 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:12.085 04:00:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.085 04:00:25 -- nvmf/common.sh@411 -- # return 0 00:09:12.085 04:00:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:12.085 04:00:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.085 04:00:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:12.085 04:00:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.085 04:00:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:12.085 04:00:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:12.085 04:00:25 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:12.085 04:00:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:12.085 04:00:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:12.085 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 04:00:25 -- nvmf/common.sh@470 -- # nvmfpid=3705958 00:09:12.085 04:00:25 -- nvmf/common.sh@471 -- # waitforlisten 3705958 00:09:12.085 04:00:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.085 04:00:25 -- common/autotest_common.sh@817 -- # '[' -z 3705958 ']' 00:09:12.085 04:00:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.085 04:00:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:12.085 04:00:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.085 04:00:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:12.085 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 [2024-04-19 04:00:25.705893] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:09:12.086 [2024-04-19 04:00:25.705946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.086 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.086 [2024-04-19 04:00:25.790354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.086 [2024-04-19 04:00:25.877452] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.086 [2024-04-19 04:00:25.877498] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.086 [2024-04-19 04:00:25.877509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.086 [2024-04-19 04:00:25.877518] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.086 [2024-04-19 04:00:25.877526] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
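nvmfappstart, just above, launches nvmf_tgt inside the target namespace and then waits for its RPC socket via waitforlisten. A sketch of that sequence under stated assumptions (the probe RPC and retry bound are illustrative, not the helper's exact internals; the binary path and flags are from the trace):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten (sketch): poll the RPC socket until the app answers
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done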
00:09:12.086 [2024-04-19 04:00:25.877581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.086 [2024-04-19 04:00:25.877681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.086 [2024-04-19 04:00:25.877774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.086 [2024-04-19 04:00:25.877774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.086 04:00:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:12.086 04:00:25 -- common/autotest_common.sh@850 -- # return 0 00:09:12.086 04:00:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:12.086 04:00:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:12.086 04:00:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.086 04:00:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.086 04:00:26 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:12.086 04:00:26 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7349 00:09:12.086 [2024-04-19 04:00:26.260531] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:12.086 04:00:26 -- target/invalid.sh@40 -- # out='request: 00:09:12.086 { 00:09:12.086 "nqn": "nqn.2016-06.io.spdk:cnode7349", 00:09:12.086 "tgt_name": "foobar", 00:09:12.086 "method": "nvmf_create_subsystem", 00:09:12.086 "req_id": 1 00:09:12.086 } 00:09:12.086 Got JSON-RPC error response 00:09:12.086 response: 00:09:12.086 { 00:09:12.086 "code": -32603, 00:09:12.086 "message": "Unable to find target foobar" 00:09:12.086 }' 00:09:12.086 04:00:26 -- target/invalid.sh@41 -- # [[ request: 00:09:12.086 { 00:09:12.086 "nqn": "nqn.2016-06.io.spdk:cnode7349", 00:09:12.086 "tgt_name": "foobar", 00:09:12.086 "method": "nvmf_create_subsystem", 00:09:12.086 "req_id": 1 00:09:12.086 } 00:09:12.086 Got JSON-RPC error response 00:09:12.086 response: 00:09:12.086 { 00:09:12.086 "code": -32603, 00:09:12.086 "message": "Unable to find target foobar" 00:09:12.086 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:12.086 04:00:26 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:12.086 04:00:26 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24768 00:09:12.086 [2024-04-19 04:00:26.517482] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24768: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:12.086 04:00:26 -- target/invalid.sh@45 -- # out='request: 00:09:12.086 { 00:09:12.086 "nqn": "nqn.2016-06.io.spdk:cnode24768", 00:09:12.086 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:12.086 "method": "nvmf_create_subsystem", 00:09:12.086 "req_id": 1 00:09:12.086 } 00:09:12.086 Got JSON-RPC error response 00:09:12.086 response: 00:09:12.086 { 00:09:12.086 "code": -32602, 00:09:12.086 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:12.086 }' 00:09:12.086 04:00:26 -- target/invalid.sh@46 -- # [[ request: 00:09:12.086 { 00:09:12.086 "nqn": "nqn.2016-06.io.spdk:cnode24768", 00:09:12.086 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:12.086 "method": "nvmf_create_subsystem", 00:09:12.086 "req_id": 1 00:09:12.086 } 00:09:12.086 Got JSON-RPC error response 00:09:12.086 response: 00:09:12.086 { 
00:09:12.086 "code": -32602, 00:09:12.086 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:12.086 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:12.086 04:00:26 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:12.086 04:00:26 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14635 00:09:12.346 [2024-04-19 04:00:26.778433] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14635: invalid model number 'SPDK_Controller' 00:09:12.346 04:00:26 -- target/invalid.sh@50 -- # out='request: 00:09:12.346 { 00:09:12.346 "nqn": "nqn.2016-06.io.spdk:cnode14635", 00:09:12.346 "model_number": "SPDK_Controller\u001f", 00:09:12.346 "method": "nvmf_create_subsystem", 00:09:12.346 "req_id": 1 00:09:12.346 } 00:09:12.346 Got JSON-RPC error response 00:09:12.346 response: 00:09:12.346 { 00:09:12.346 "code": -32602, 00:09:12.346 "message": "Invalid MN SPDK_Controller\u001f" 00:09:12.346 }' 00:09:12.346 04:00:26 -- target/invalid.sh@51 -- # [[ request: 00:09:12.346 { 00:09:12.346 "nqn": "nqn.2016-06.io.spdk:cnode14635", 00:09:12.346 "model_number": "SPDK_Controller\u001f", 00:09:12.346 "method": "nvmf_create_subsystem", 00:09:12.346 "req_id": 1 00:09:12.346 } 00:09:12.346 Got JSON-RPC error response 00:09:12.346 response: 00:09:12.346 { 00:09:12.346 "code": -32602, 00:09:12.346 "message": "Invalid MN SPDK_Controller\u001f" 00:09:12.346 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:12.346 04:00:26 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:12.346 04:00:26 -- target/invalid.sh@19 -- # local length=21 ll 00:09:12.346 04:00:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:12.346 04:00:26 -- target/invalid.sh@21 -- # local chars 00:09:12.346 04:00:26 -- target/invalid.sh@22 -- # local string 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 127 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 120 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=x 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 109 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=m 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 33 00:09:12.346 04:00:26 -- 
target/invalid.sh@25 -- # echo -e '\x21' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+='!' 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 113 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=q 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 50 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=2 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 70 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=F 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 82 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=R 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # printf %x 46 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:12.346 04:00:26 -- target/invalid.sh@25 -- # string+=. 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.346 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 114 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=r 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 56 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=8 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 112 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=p 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 89 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=Y 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 76 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=L 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 85 00:09:12.605 04:00:26 -- 
target/invalid.sh@25 -- # echo -e '\x55' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=U 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 92 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+='\' 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 69 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=E 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 89 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=Y 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 92 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+='\' 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 101 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+=e 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # printf %x 40 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:12.605 04:00:26 -- target/invalid.sh@25 -- # string+='(' 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.605 04:00:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.605 04:00:26 -- target/invalid.sh@28 -- # [[  == \- ]] 00:09:12.605 04:00:26 -- target/invalid.sh@31 -- # echo 'xm!q2FR.r8pYLU\EY\e(' 00:09:12.605 04:00:26 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'xm!q2FR.r8pYLU\EY\e(' nqn.2016-06.io.spdk:cnode22248 00:09:12.865 [2024-04-19 04:00:27.167894] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22248: invalid serial number 'xm!q2FR.r8pYLU\EY\e(' 00:09:12.865 04:00:27 -- target/invalid.sh@54 -- # out='request: 00:09:12.865 { 00:09:12.865 "nqn": "nqn.2016-06.io.spdk:cnode22248", 00:09:12.865 "serial_number": "\u007fxm!q2FR.r8pYLU\\EY\\e(", 00:09:12.865 "method": "nvmf_create_subsystem", 00:09:12.865 "req_id": 1 00:09:12.865 } 00:09:12.865 Got JSON-RPC error response 00:09:12.865 response: 00:09:12.865 { 00:09:12.865 "code": -32602, 00:09:12.865 "message": "Invalid SN \u007fxm!q2FR.r8pYLU\\EY\\e(" 00:09:12.865 }' 00:09:12.865 04:00:27 -- target/invalid.sh@55 -- # [[ request: 00:09:12.865 { 00:09:12.865 "nqn": "nqn.2016-06.io.spdk:cnode22248", 00:09:12.865 "serial_number": "\u007fxm!q2FR.r8pYLU\\EY\\e(", 00:09:12.865 "method": "nvmf_create_subsystem", 00:09:12.865 "req_id": 1 00:09:12.865 } 00:09:12.865 Got JSON-RPC error response 00:09:12.865 response: 00:09:12.865 { 00:09:12.865 "code": -32602, 
00:09:12.865 "message": "Invalid SN \u007fxm!q2FR.r8pYLU\\EY\\e(" 00:09:12.865 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:12.865 04:00:27 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:12.865 04:00:27 -- target/invalid.sh@19 -- # local length=41 ll 00:09:12.865 04:00:27 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:12.865 04:00:27 -- target/invalid.sh@21 -- # local chars 00:09:12.865 04:00:27 -- target/invalid.sh@22 -- # local string 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 100 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=d 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 115 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=s 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 98 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=b 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 120 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=x 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 94 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='^' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 87 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=W 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 73 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=I 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 94 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='^' 00:09:12.865 04:00:27 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 69 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=E 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 60 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='<' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 81 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=Q 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 98 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=b 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 105 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=i 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 74 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=J 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 54 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=6 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 121 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=y 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 86 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=V 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 64 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=@ 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 42 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='*' 00:09:12.865 04:00:27 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 32 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=' ' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 39 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=\' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 35 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='#' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 127 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 38 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+='&' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 32 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=' ' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 39 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=\' 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 78 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=N 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 45 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=- 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 74 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=J 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 110 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # string+=n 00:09:12.865 
04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:12.865 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:12.865 04:00:27 -- target/invalid.sh@25 -- # printf %x 83 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=S 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 57 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=9 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 99 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=c 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 38 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+='&' 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 60 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+='<' 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 36 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+='$' 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 121 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=y 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 78 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=N 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 121 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=y 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 95 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+=_ 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # printf %x 91 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:13.124 04:00:27 -- target/invalid.sh@25 -- # string+='[' 00:09:13.124 
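The xtrace above is target/invalid.sh assembling a 41-character serial number one byte at a time from the printable-ASCII range plus DEL. Condensed to its logic, the generator is roughly the following (a minimal sketch reconstructed from the trace; the RANDOM-based pick is an assumption, everything else mirrors the logged commands). Since the NVMe serial-number field holds 20 ASCII bytes, the 41-character string is expected to be rejected with "Invalid SN", just like the serial in the JSON error above.

  gen_random_s() {                      # emit $1 random characters from ASCII 0x20..0x7f
      local length=$1 ll string=
      local chars=($(seq 32 127))       # same range as the chars=('32' ... '127') array in the trace
      for ((ll = 0; ll < length; ll++)); do
          local hex
          hex=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
          string+=$(echo -e "\x$hex")   # decode the byte and append it, e.g. \x64 -> d
      done
      echo "$string"
  }
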
04:00:27 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:13.124 04:00:27 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:13.124 04:00:27 -- target/invalid.sh@28 -- # [[ d == \- ]] 00:09:13.124 04:00:27 -- target/invalid.sh@31 -- # echo 'dsbx^WI^E /dev/null' 00:09:15.459 04:00:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.999 04:00:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.999 00:09:17.999 real 0m12.144s 00:09:17.999 user 0m20.273s 00:09:17.999 sys 0m5.360s 00:09:17.999 04:00:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:17.999 04:00:31 -- common/autotest_common.sh@10 -- # set +x 00:09:17.999 ************************************ 00:09:17.999 END TEST nvmf_invalid 00:09:17.999 ************************************ 00:09:17.999 04:00:32 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:17.999 04:00:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:17.999 04:00:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.999 04:00:32 -- common/autotest_common.sh@10 -- # set +x 00:09:17.999 ************************************ 00:09:17.999 START TEST nvmf_abort 00:09:17.999 ************************************ 00:09:17.999 04:00:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:17.999 * Looking for test storage... 00:09:17.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.999 04:00:32 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.999 04:00:32 -- nvmf/common.sh@7 -- # uname -s 00:09:17.999 04:00:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.999 04:00:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.999 04:00:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.999 04:00:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.999 04:00:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.999 04:00:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.999 04:00:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.999 04:00:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.999 04:00:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.999 04:00:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.999 04:00:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:17.999 04:00:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:17.999 04:00:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.999 04:00:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.999 04:00:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.999 04:00:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.999 04:00:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.999 04:00:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.999 04:00:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.999 04:00:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.000 04:00:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three entries repeated four more times...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.000 04:00:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...]:/var/lib/snapd/snap/bin 00:09:18.000 04:00:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...]:/var/lib/snapd/snap/bin 00:09:18.000 04:00:32 -- paths/export.sh@5 -- # export PATH 00:09:18.000 04:00:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...]:/var/lib/snapd/snap/bin 00:09:18.000 04:00:32 -- nvmf/common.sh@47 -- # : 0 00:09:18.000 04:00:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.000 04:00:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.000 04:00:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.000 04:00:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.000 04:00:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.000 04:00:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.000 04:00:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.000 04:00:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.000 04:00:32 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.000 04:00:32 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:18.000 04:00:32 -- target/abort.sh@14 -- # nvmftestinit 00:09:18.000 04:00:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:18.000 04:00:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.000 04:00:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:18.000 04:00:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:18.000 04:00:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:18.000 04:00:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:09:18.000 04:00:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.000 04:00:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.000 04:00:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:18.000 04:00:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:18.000 04:00:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.000 04:00:32 -- common/autotest_common.sh@10 -- # set +x 00:09:23.312 04:00:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:23.312 04:00:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.312 04:00:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.312 04:00:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.312 04:00:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.312 04:00:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.312 04:00:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.312 04:00:37 -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.312 04:00:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.312 04:00:37 -- nvmf/common.sh@296 -- # e810=() 00:09:23.312 04:00:37 -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.312 04:00:37 -- nvmf/common.sh@297 -- # x722=() 00:09:23.312 04:00:37 -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.312 04:00:37 -- nvmf/common.sh@298 -- # mlx=() 00:09:23.312 04:00:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.312 04:00:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.312 04:00:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.312 04:00:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.312 04:00:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.312 04:00:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.312 04:00:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:23.312 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:23.312 04:00:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.312 04:00:37 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:23.312 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:23.312 04:00:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.312 04:00:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.312 04:00:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.312 04:00:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.312 04:00:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.312 04:00:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.312 04:00:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:23.312 Found net devices under 0000:af:00.0: cvl_0_0 00:09:23.312 04:00:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.313 04:00:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.313 04:00:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.313 04:00:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.313 04:00:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.313 04:00:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:23.313 Found net devices under 0000:af:00.1: cvl_0_1 00:09:23.313 04:00:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.313 04:00:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:23.313 04:00:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:23.313 04:00:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:23.313 04:00:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:23.313 04:00:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:23.313 04:00:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.313 04:00:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.313 04:00:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.313 04:00:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.313 04:00:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.313 04:00:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.313 04:00:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.313 04:00:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.313 04:00:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.313 04:00:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.313 04:00:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.313 04:00:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.313 04:00:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.572 04:00:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.572 04:00:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.572 04:00:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.572 04:00:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
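The sequence above is nvmf/common.sh matching the two E810 ports by PCI vendor:device ID (0x8086:0x159b), resolving each to its kernel net device through sysfs, and then splitting the pair across a network namespace so that one host can act as both NVMe/TCP target and initiator. Reduced to bare commands, the setup is roughly this sketch (device and namespace names as seen in the trace):

  # resolve a PCI function to its net device, as the script's sysfs glob does
  ls /sys/bus/pci/devices/0000:af:00.0/net/          # -> cvl_0_0

  # the target port lives in its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
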
00:09:23.572 04:00:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.572 04:00:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.572 04:00:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:09:23.572 00:09:23.572 --- 10.0.0.2 ping statistics --- 00:09:23.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.572 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:23.572 04:00:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:09:23.572 00:09:23.572 --- 10.0.0.1 ping statistics --- 00:09:23.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.572 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:09:23.572 04:00:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.572 04:00:38 -- nvmf/common.sh@411 -- # return 0 00:09:23.572 04:00:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:23.572 04:00:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.830 04:00:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:23.830 04:00:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:23.830 04:00:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.830 04:00:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:23.830 04:00:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:23.830 04:00:38 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:23.830 04:00:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:23.830 04:00:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:23.830 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.830 04:00:38 -- nvmf/common.sh@470 -- # nvmfpid=3710502 00:09:23.830 04:00:38 -- nvmf/common.sh@471 -- # waitforlisten 3710502 00:09:23.830 04:00:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:23.830 04:00:38 -- common/autotest_common.sh@817 -- # '[' -z 3710502 ']' 00:09:23.830 04:00:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.830 04:00:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:23.830 04:00:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.830 04:00:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:23.830 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:23.830 [2024-04-19 04:00:38.188837] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
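The iptables rule above opens TCP port 4420 (the NVMe/TCP listener) on the initiator-facing interface, and the two pings verify reachability in both directions before the target starts. The -m 0xE mask handed to nvmf_tgt is plain bit arithmetic: 0xE is binary 1110, so reactors are pinned to cores 1, 2 and 3 while core 0 is left free, which matches the "Total cores available: 3" and the three "Reactor started on core" notices below. A one-line check of the mask:

  for i in {0..3}; do (( (0xE >> i) & 1 )) && echo "reactor on core $i"; done   # prints cores 1, 2, 3
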
00:09:23.830 [2024-04-19 04:00:38.188880] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.830 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.830 [2024-04-19 04:00:38.254237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.830 [2024-04-19 04:00:38.339750] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.830 [2024-04-19 04:00:38.339798] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.830 [2024-04-19 04:00:38.339808] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.830 [2024-04-19 04:00:38.339817] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.830 [2024-04-19 04:00:38.339825] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.830 [2024-04-19 04:00:38.339932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.830 [2024-04-19 04:00:38.340046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.830 [2024-04-19 04:00:38.340047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.089 04:00:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:24.089 04:00:38 -- common/autotest_common.sh@850 -- # return 0 00:09:24.089 04:00:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:24.089 04:00:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 04:00:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.089 04:00:38 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 [2024-04-19 04:00:38.488527] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 Malloc0 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 Delay0 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 
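Stripped of xtrace, the target just built for the abort test amounts to five RPCs: a TCP transport, a 64 MB malloc bdev with a 4096-byte block size, a delay bdev layered on top of it (the delay arguments are in microseconds, so 1000000 is a full second of added latency, long enough for aborts to land), a subsystem, and its namespace. The same construction can be issued against a running nvmf_tgt with scripts/rpc.py; rpc_cmd in the trace is the in-test wrapper for the same RPCs, and the two listeners are added in the trace that follows:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
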
00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 [2024-04-19 04:00:38.563176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.089 04:00:38 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.089 04:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.089 04:00:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.089 04:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.090 04:00:38 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:24.090 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.349 [2024-04-19 04:00:38.685225] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:26.253 Initializing NVMe Controllers 00:09:26.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:26.253 controller IO queue size 128 less than required 00:09:26.253 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:26.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:26.253 Initialization complete. Launching workers. 
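The abort example queues reads at depth 128 against the delayed namespace and submits an abort for every outstanding command. The counters in the summary below reconcile exactly: 127 I/O completed normally and 29223 failed (were aborted), 29350 in total, which equals the 29288 abort commands submitted plus the 62 that could not be submitted; of the submitted aborts, 29227 succeeded, 61 came back "unsuccess" (presumably the read had already completed), and none failed outright.

  echo $((29227 + 61 + 62)) $((127 + 29223))   # -> 29350 29350: abort counters match the I/O counters
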
00:09:26.253 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29223 00:09:26.253 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29288, failed to submit 62 00:09:26.253 success 29227, unsuccess 61, failed 0 00:09:26.253 04:00:40 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.253 04:00:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.253 04:00:40 -- common/autotest_common.sh@10 -- # set +x 00:09:26.253 04:00:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.253 04:00:40 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:26.253 04:00:40 -- target/abort.sh@38 -- # nvmftestfini 00:09:26.253 04:00:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:26.253 04:00:40 -- nvmf/common.sh@117 -- # sync 00:09:26.253 04:00:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.253 04:00:40 -- nvmf/common.sh@120 -- # set +e 00:09:26.253 04:00:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.253 04:00:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.253 rmmod nvme_tcp 00:09:26.253 rmmod nvme_fabrics 00:09:26.253 rmmod nvme_keyring 00:09:26.512 04:00:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.512 04:00:40 -- nvmf/common.sh@124 -- # set -e 00:09:26.512 04:00:40 -- nvmf/common.sh@125 -- # return 0 00:09:26.512 04:00:40 -- nvmf/common.sh@478 -- # '[' -n 3710502 ']' 00:09:26.512 04:00:40 -- nvmf/common.sh@479 -- # killprocess 3710502 00:09:26.512 04:00:40 -- common/autotest_common.sh@936 -- # '[' -z 3710502 ']' 00:09:26.512 04:00:40 -- common/autotest_common.sh@940 -- # kill -0 3710502 00:09:26.512 04:00:40 -- common/autotest_common.sh@941 -- # uname 00:09:26.512 04:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:26.512 04:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3710502 00:09:26.512 04:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:26.512 04:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:26.512 04:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3710502' 00:09:26.512 killing process with pid 3710502 00:09:26.512 04:00:40 -- common/autotest_common.sh@955 -- # kill 3710502 00:09:26.512 04:00:40 -- common/autotest_common.sh@960 -- # wait 3710502 00:09:26.772 04:00:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:26.772 04:00:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:26.772 04:00:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:26.772 04:00:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.772 04:00:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.772 04:00:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.772 04:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.772 04:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.771 04:00:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.771 00:09:28.771 real 0m10.995s 00:09:28.771 user 0m11.413s 00:09:28.771 sys 0m5.262s 00:09:28.771 04:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.771 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.771 ************************************ 00:09:28.771 END TEST nvmf_abort 00:09:28.772 ************************************ 00:09:28.772 04:00:43 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:28.772 04:00:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:28.772 04:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.772 04:00:43 -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 ************************************ 00:09:29.029 START TEST nvmf_ns_hotplug_stress 00:09:29.029 ************************************ 00:09:29.029 04:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:29.029 * Looking for test storage... 00:09:29.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.029 04:00:43 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh, scripts/common.sh and paths/export.sh re-sourced; the NVMF_* defaults, hostnqn generation and PATH xtrace are identical to the nvmf_abort prologue above ...]
00:09:29.029 04:00:43 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.029 04:00:43 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit
[... nvmftestinit / prepare_net_devs / remove_spdk_ns xtrace identical to the run above ...] 00:09:35.595 04:00:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:09:35.595 04:00:48 -- nvmf/common.sh@291 -- # pci_devs=()
[... the PCI scan and TCP init repeat the nvmf_abort sequence above verbatim: the e810/x722/mlx device-ID tables are rebuilt and both E810 ports match 0x8086:0x159b ...]
00:09:35.595 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:35.595 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:35.595 Found net devices under 0000:af:00.0: cvl_0_0 00:09:35.595 Found net devices under 0000:af:00.1: cvl_0_1
[... cvl_0_0_ns_spdk is re-created, cvl_0_0 moved into it, 10.0.0.1/24 and 10.0.0.2/24 re-assigned, links brought up and the port-4420 iptables rule re-inserted, as above ...] 00:09:35.595 04:00:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:35.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:35.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:09:35.595 00:09:35.595 --- 10.0.0.2 ping statistics --- 00:09:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.595 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:09:35.595 04:00:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:09:35.595 00:09:35.595 --- 10.0.0.1 ping statistics --- 00:09:35.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.595 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:35.595 04:00:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.595 04:00:49 -- nvmf/common.sh@411 -- # return 0 00:09:35.595 04:00:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:35.595 04:00:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.595 04:00:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:35.595 04:00:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:35.595 04:00:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.595 04:00:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:35.595 04:00:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:35.595 04:00:49 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:35.595 04:00:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:35.595 04:00:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:35.595 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.595 04:00:49 -- nvmf/common.sh@470 -- # nvmfpid=3714663 00:09:35.595 04:00:49 -- nvmf/common.sh@471 -- # waitforlisten 3714663 00:09:35.595 04:00:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:35.595 04:00:49 -- common/autotest_common.sh@817 -- # '[' -z 3714663 ']' 00:09:35.596 04:00:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.596 04:00:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:35.596 04:00:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.596 04:00:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:35.596 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 [2024-04-19 04:00:49.213153] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:09:35.596 [2024-04-19 04:00:49.213206] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.596 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.596 [2024-04-19 04:00:49.291510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.596 [2024-04-19 04:00:49.380102] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.596 [2024-04-19 04:00:49.380145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:35.596 [2024-04-19 04:00:49.380156] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.596 [2024-04-19 04:00:49.380166] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.596 [2024-04-19 04:00:49.380173] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.596 [2024-04-19 04:00:49.380284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.596 [2024-04-19 04:00:49.380398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.596 [2024-04-19 04:00:49.380399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.596 04:00:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.596 04:00:49 -- common/autotest_common.sh@850 -- # return 0 00:09:35.596 04:00:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:35.596 04:00:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:35.596 04:00:49 -- common/autotest_common.sh@10 -- # set +x 00:09:35.596 04:00:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.596 04:00:49 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:35.596 04:00:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.596 [2024-04-19 04:00:49.661207] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.596 04:00:49 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.596 04:00:49 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.596 [2024-04-19 04:00:50.015965] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.596 04:00:50 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.854 04:00:50 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:36.111 Malloc0 00:09:36.111 04:00:50 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:36.369 Delay0 00:09:36.369 04:00:50 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.632 04:00:51 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:36.890 NULL1 00:09:36.890 04:00:51 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:37.149 04:00:51 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3715205 00:09:37.149 04:00:51 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread 
-o 512 -Q 1000 00:09:37.149 04:00:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:37.149 04:00:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.149 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.525 Read completed with error (sct=0, sc=11) 00:09:38.525 04:00:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.783 04:00:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:38.783 04:00:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:38.783 true 00:09:38.783 04:00:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:38.783 04:00:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same kill -0 / nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns / bdev_null_resize cycle repeats, with occasional "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" interleaved, as null_size steps from 1002 through 1014 ...] 00:09:52.340 true 00:09:52.340 04:01:06 --
target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:52.340 04:01:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.275 04:01:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.534 04:01:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:53.534 04:01:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:53.792 true 00:09:53.792 04:01:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:53.792 04:01:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.050 04:01:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.319 04:01:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:54.320 04:01:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:54.581 true 00:09:54.581 04:01:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:54.581 04:01:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.516 04:01:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.775 04:01:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:55.775 04:01:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:56.033 true 00:09:56.033 04:01:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:56.033 04:01:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.292 04:01:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.551 04:01:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:56.551 04:01:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:56.551 true 00:09:56.810 04:01:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:56.810 04:01:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.746 04:01:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.746 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.010 04:01:12 -- target/ns_hotplug_stress.sh@40 
-- # null_size=1019 00:09:58.010 04:01:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:58.270 true 00:09:58.270 04:01:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:58.270 04:01:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.531 04:01:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.804 04:01:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:58.804 04:01:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:58.804 true 00:09:59.069 04:01:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:09:59.069 04:01:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.006 04:01:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.006 04:01:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:00.006 04:01:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:00.264 true 00:10:00.264 04:01:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:00.264 04:01:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.523 04:01:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.784 04:01:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:00.784 04:01:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:01.045 true 00:10:01.045 04:01:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:01.045 04:01:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.982 04:01:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.241 04:01:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:02.241 04:01:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:02.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.499 true 00:10:02.499 04:01:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:02.499 04:01:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.757 04:01:17 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.015 04:01:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:03.015 04:01:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:03.275 true 00:10:03.275 04:01:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:03.275 04:01:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.212 04:01:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.471 04:01:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:04.471 04:01:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:04.471 true 00:10:04.471 04:01:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:04.471 04:01:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.730 04:01:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.989 04:01:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:04.989 04:01:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:05.248 true 00:10:05.248 04:01:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:05.248 04:01:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.186 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.186 04:01:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.446 04:01:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:06.446 04:01:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:06.705 true 00:10:06.705 04:01:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:06.705 04:01:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.964 04:01:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.224 04:01:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:07.224 04:01:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:07.483 true 00:10:07.483 04:01:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:07.483 04:01:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.421 
Initializing NVMe Controllers
00:10:08.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:08.421 Controller IO queue size 128, less than required.
00:10:08.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:08.422 Controller IO queue size 128, less than required.
00:10:08.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:08.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:08.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:08.422 Initialization complete. Launching workers.
00:10:08.422 ========================================================
00:10:08.422                                                                                                Latency(us)
00:10:08.422 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:08.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     717.23       0.35  101980.96    3404.24 1016851.09
00:10:08.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14216.40       6.94    9003.66    1896.79  562987.44
00:10:08.422 ========================================================
00:10:08.422 Total                                                                    :   14933.63       7.29   13469.18    1896.79 1016851.09
00:10:08.422
00:10:08.422 04:01:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.681 04:01:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:10:08.681 04:01:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:08.939 true 00:10:08.939 04:01:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3715205 00:10:08.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3715205) - No such process 00:10:08.939 04:01:23 -- target/ns_hotplug_stress.sh@44 -- # wait 3715205 00:10:08.939 04:01:23 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:08.939 04:01:23 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:08.939 04:01:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:08.939 04:01:23 -- nvmf/common.sh@117 -- # sync 00:10:08.939 04:01:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:08.939 04:01:23 -- nvmf/common.sh@120 -- # set +e 00:10:08.939 04:01:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:08.939 04:01:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:08.939 rmmod nvme_tcp 00:10:08.939 rmmod nvme_fabrics 00:10:08.939 rmmod nvme_keyring 00:10:08.939 04:01:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:08.939 04:01:23 -- nvmf/common.sh@124 -- # set -e 00:10:08.939 04:01:23 -- nvmf/common.sh@125 -- # return 0 00:10:08.939 04:01:23 -- nvmf/common.sh@478 -- # '[' -n 3714663 ']' 00:10:08.939 04:01:23 -- nvmf/common.sh@479 -- # killprocess 3714663 00:10:08.939 04:01:23 -- common/autotest_common.sh@936 -- # '[' -z 3714663 ']' 00:10:08.939 04:01:23 -- common/autotest_common.sh@940 -- # kill -0 3714663 00:10:08.939 04:01:23 -- common/autotest_common.sh@941 -- # uname 00:10:08.939 04:01:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.939 04:01:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3714663 00:10:08.939 04:01:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:08.939 04:01:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo
']' 00:10:08.939 04:01:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3714663' 00:10:08.939 killing process with pid 3714663 00:10:08.939 04:01:23 -- common/autotest_common.sh@955 -- # kill 3714663 00:10:08.939 04:01:23 -- common/autotest_common.sh@960 -- # wait 3714663 00:10:09.197 04:01:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:09.197 04:01:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:09.197 04:01:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:09.197 04:01:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.197 04:01:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.197 04:01:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.197 04:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.197 04:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.735 04:01:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.735 00:10:11.735 real 0m42.412s 00:10:11.735 user 2m34.462s 00:10:11.735 sys 0m10.136s 00:10:11.735 04:01:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:11.735 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:10:11.735 ************************************ 00:10:11.735 END TEST nvmf_ns_hotplug_stress 00:10:11.735 ************************************ 00:10:11.735 04:01:25 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:11.735 04:01:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:11.735 04:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:11.735 04:01:25 -- common/autotest_common.sh@10 -- # set +x 00:10:11.735 ************************************ 00:10:11.735 START TEST nvmf_connect_stress 00:10:11.735 ************************************ 00:10:11.735 04:01:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:11.735 * Looking for test storage... 
00:10:11.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.735 04:01:25 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.735 04:01:25 -- nvmf/common.sh@7 -- # uname -s 00:10:11.735 04:01:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.735 04:01:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.735 04:01:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.735 04:01:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.735 04:01:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.735 04:01:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.735 04:01:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.735 04:01:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.735 04:01:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.735 04:01:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.735 04:01:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:11.735 04:01:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:11.735 04:01:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.735 04:01:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.735 04:01:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.735 04:01:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.735 04:01:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.735 04:01:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.735 04:01:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.735 04:01:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.735 04:01:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.735 04:01:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.735 04:01:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.735 04:01:26 -- paths/export.sh@5 -- # export PATH 00:10:11.735 04:01:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.735 04:01:26 -- nvmf/common.sh@47 -- # : 0 00:10:11.735 04:01:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.735 04:01:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.735 04:01:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.735 04:01:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.735 04:01:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.735 04:01:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.735 04:01:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.735 04:01:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.735 04:01:26 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:11.735 04:01:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:11.735 04:01:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.735 04:01:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:11.735 04:01:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:11.735 04:01:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:11.735 04:01:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.735 04:01:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.735 04:01:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.735 04:01:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:11.735 04:01:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:11.735 04:01:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.735 04:01:26 -- common/autotest_common.sh@10 -- # set +x 00:10:17.072 04:01:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:17.072 04:01:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.072 04:01:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.072 04:01:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.072 04:01:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.072 04:01:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.072 04:01:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.072 04:01:31 -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.072 04:01:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.072 04:01:31 -- nvmf/common.sh@296 -- # e810=() 00:10:17.072 04:01:31 -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.072 04:01:31 -- nvmf/common.sh@297 -- # x722=() 
00:10:17.072 04:01:31 -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.072 04:01:31 -- nvmf/common.sh@298 -- # mlx=() 00:10:17.072 04:01:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.072 04:01:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.072 04:01:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.072 04:01:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.072 04:01:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.072 04:01:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:17.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:17.072 04:01:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.072 04:01:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:17.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:17.072 04:01:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.072 04:01:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.072 04:01:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.072 04:01:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:17.072 Found net devices under 0000:af:00.0: cvl_0_0 00:10:17.072 04:01:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
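At this point in the xtrace, nvmf/common.sh has matched both E810 ports (0000:af:00.0 and 0000:af:00.1, device id 0x159b) and is resolving each PCI address to its kernel net device through sysfs: the records just above show the first port mapping to cvl_0_0, and the records below repeat the same steps for cvl_0_1. A minimal sketch of that lookup, paraphrased from the xtrace rather than quoted from the script, reusing the variable names visible in the trace:

    # for each matched NIC, glob its net device(s) out of sysfs
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:af:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")                   # collects cvl_0_0, then cvl_0_1 below
    done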
00:10:17.072 04:01:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.072 04:01:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.072 04:01:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.072 04:01:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:17.072 Found net devices under 0000:af:00.1: cvl_0_1 00:10:17.072 04:01:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.072 04:01:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:17.072 04:01:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:17.072 04:01:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:17.072 04:01:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.072 04:01:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.072 04:01:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.072 04:01:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.072 04:01:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.073 04:01:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.073 04:01:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.073 04:01:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.073 04:01:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.073 04:01:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.073 04:01:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.073 04:01:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.073 04:01:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.073 04:01:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.073 04:01:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.073 04:01:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.073 04:01:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.332 04:01:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.332 04:01:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.332 04:01:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:17.332 00:10:17.332 --- 10.0.0.2 ping statistics --- 00:10:17.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.332 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:17.332 04:01:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:10:17.332 00:10:17.332 --- 10.0.0.1 ping statistics --- 00:10:17.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.332 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:17.332 04:01:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.332 04:01:31 -- nvmf/common.sh@411 -- # return 0 00:10:17.332 04:01:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:17.332 04:01:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.332 04:01:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:17.333 04:01:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:17.333 04:01:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.333 04:01:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:17.333 04:01:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:17.333 04:01:31 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:17.333 04:01:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:17.333 04:01:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:17.333 04:01:31 -- common/autotest_common.sh@10 -- # set +x 00:10:17.333 04:01:31 -- nvmf/common.sh@470 -- # nvmfpid=3724555 00:10:17.333 04:01:31 -- nvmf/common.sh@471 -- # waitforlisten 3724555 00:10:17.333 04:01:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:17.333 04:01:31 -- common/autotest_common.sh@817 -- # '[' -z 3724555 ']' 00:10:17.333 04:01:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.333 04:01:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.333 04:01:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.333 04:01:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.333 04:01:31 -- common/autotest_common.sh@10 -- # set +x 00:10:17.333 [2024-04-19 04:01:31.777128] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:17.333 [2024-04-19 04:01:31.777182] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.333 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.333 [2024-04-19 04:01:31.855920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.593 [2024-04-19 04:01:31.944505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.593 [2024-04-19 04:01:31.944550] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.593 [2024-04-19 04:01:31.944561] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.593 [2024-04-19 04:01:31.944571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.593 [2024-04-19 04:01:31.944582] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
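The three app_setup_trace notices just above spell out how this target's tracepoints could be inspected while the test is still running. A sketch, taking the tool name, the -s/-i flags, and the shared-memory path exactly as the notices give them, and assuming a shell on the same host:

    # live snapshot of the nvmf target's trace ring (shared-memory instance id 0)
    spdk_trace -s nvmf -i 0
    # or, as the last notice suggests, keep the raw ring buffer for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0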
00:10:17.593 [2024-04-19 04:01:31.944692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.593 [2024-04-19 04:01:31.944806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.593 [2024-04-19 04:01:31.944806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.593 04:01:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:17.593 04:01:32 -- common/autotest_common.sh@850 -- # return 0 00:10:17.593 04:01:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:17.593 04:01:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:17.593 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:17.593 04:01:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.593 04:01:32 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.593 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:17.593 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:17.593 [2024-04-19 04:01:32.085680] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.593 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:17.593 04:01:32 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.593 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:17.593 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:17.593 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:17.593 04:01:32 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.593 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:17.593 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:17.853 [2024-04-19 04:01:32.121505] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.853 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:17.853 04:01:32 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:17.853 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:17.853 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:17.853 NULL1 00:10:17.853 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:17.853 04:01:32 -- target/connect_stress.sh@21 -- # PERF_PID=3724798 00:10:17.853 04:01:32 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:17.853 04:01:32 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:17.853 04:01:32 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:17.853 04:01:32 -- target/connect_stress.sh@28 -- # cat 00:10:17.853 04:01:32 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:17.853 04:01:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.853 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:17.853 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.112 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.112 04:01:32 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:18.112 04:01:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.112 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.112 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.370 04:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.370 04:01:32 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:18.370 04:01:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.370 04:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.370 04:01:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.938 04:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.938 04:01:33 -- target/connect_stress.sh@34 -- # 
kill -0 3724798 00:10:18.938 04:01:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:18.938 04:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.938 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.196 04:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.196 04:01:33 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:19.196 04:01:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.196 04:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.196 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.455 04:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.455 04:01:33 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:19.455 04:01:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.455 04:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.455 04:01:33 -- common/autotest_common.sh@10 -- # set +x 00:10:19.714 04:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.714 04:01:34 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:19.714 04:01:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.714 04:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.714 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:10:19.973 04:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:19.973 04:01:34 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:19.973 04:01:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.973 04:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:19.973 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.541 04:01:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.541 04:01:34 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:20.541 04:01:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.541 04:01:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.541 04:01:34 -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 04:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.800 04:01:35 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:20.800 04:01:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.800 04:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.800 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:10:21.059 04:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.059 04:01:35 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:21.059 04:01:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.059 04:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.059 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:10:21.317 04:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.317 04:01:35 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:21.317 04:01:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.317 04:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.317 04:01:35 -- common/autotest_common.sh@10 -- # set +x 00:10:21.575 04:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:21.575 04:01:36 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:21.575 04:01:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.575 04:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:21.575 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.142 04:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:22.142 04:01:36 -- target/connect_stress.sh@34 -- # kill -0 
3724798 00:10:22.142 04:01:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.142 04:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.142 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.401 04:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:22.401 04:01:36 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:22.401 04:01:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.401 04:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.401 04:01:36 -- common/autotest_common.sh@10 -- # set +x 00:10:22.659 04:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:22.659 04:01:37 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:22.659 04:01:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.659 04:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.659 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.918 04:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:22.918 04:01:37 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:22.918 04:01:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.918 04:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.918 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:10:23.484 04:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.484 04:01:37 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:23.484 04:01:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.484 04:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.484 04:01:37 -- common/autotest_common.sh@10 -- # set +x 00:10:23.742 04:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.742 04:01:38 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:23.742 04:01:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.742 04:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.742 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:10:24.001 04:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.001 04:01:38 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:24.001 04:01:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.001 04:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.001 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:10:24.259 04:01:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.259 04:01:38 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:24.259 04:01:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.259 04:01:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.259 04:01:38 -- common/autotest_common.sh@10 -- # set +x 00:10:24.518 04:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.518 04:01:39 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:24.518 04:01:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.518 04:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.518 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.085 04:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.085 04:01:39 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:25.085 04:01:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.085 04:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.085 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.344 04:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.344 04:01:39 -- target/connect_stress.sh@34 -- # kill -0 3724798 
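The alternating "kill -0 3724798" / "rpc_cmd" records above and below are iterations of connect_stress.sh's watch loop: as long as the connect_stress client (PERF_PID=3724798) is alive, the batch of twenty RPC lines written into rpc.txt by the seq/cat loop earlier is replayed against the target. A rough reconstruction from the xtrace (script lines @34-@35), not the verbatim source:

    # replay the batched RPCs for as long as the stress client is still running
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"   # rpcs=.../test/nvmf/target/rpc.txt, assembled above
    done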
00:10:25.344 04:01:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.344 04:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.344 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.663 04:01:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.663 04:01:39 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:25.663 04:01:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.663 04:01:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.663 04:01:39 -- common/autotest_common.sh@10 -- # set +x 00:10:25.921 04:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.922 04:01:40 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:25.922 04:01:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.922 04:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.922 04:01:40 -- common/autotest_common.sh@10 -- # set +x 00:10:26.180 04:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.180 04:01:40 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:26.180 04:01:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.180 04:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.180 04:01:40 -- common/autotest_common.sh@10 -- # set +x 00:10:26.746 04:01:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.746 04:01:40 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:26.746 04:01:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.746 04:01:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.746 04:01:40 -- common/autotest_common.sh@10 -- # set +x 00:10:27.004 04:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.004 04:01:41 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:27.004 04:01:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.004 04:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.004 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:10:27.262 04:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.262 04:01:41 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:27.262 04:01:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.262 04:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.262 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:10:27.521 04:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.521 04:01:41 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:27.521 04:01:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.521 04:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.521 04:01:41 -- common/autotest_common.sh@10 -- # set +x 00:10:27.779 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.779 04:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.779 04:01:42 -- target/connect_stress.sh@34 -- # kill -0 3724798 00:10:27.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3724798) - No such process 00:10:27.779 04:01:42 -- target/connect_stress.sh@38 -- # wait 3724798 00:10:27.779 04:01:42 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:27.779 04:01:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:27.779 04:01:42 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:27.779 04:01:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:27.779 04:01:42 -- nvmf/common.sh@117 -- # sync 
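The records around this point are the same nvmftestfini/nvmfcleanup teardown that closed the hotplug test earlier: sync, retry unloading the kernel initiator modules, then kill the target process. Condensed into a sketch under the function and module names visible in the xtrace; the retry/exit condition inside the loop is inferred, not shown in the log:

    # nvmfcleanup, roughly: flush, then unload the initiator stack with retries
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    killprocess "$nvmfpid"   # nvmfpid=3724555, the nvmf_tgt started for this test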
00:10:27.779 04:01:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.779 04:01:42 -- nvmf/common.sh@120 -- # set +e 00:10:27.779 04:01:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.779 04:01:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.779 rmmod nvme_tcp 00:10:28.038 rmmod nvme_fabrics 00:10:28.038 rmmod nvme_keyring 00:10:28.038 04:01:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.038 04:01:42 -- nvmf/common.sh@124 -- # set -e 00:10:28.038 04:01:42 -- nvmf/common.sh@125 -- # return 0 00:10:28.038 04:01:42 -- nvmf/common.sh@478 -- # '[' -n 3724555 ']' 00:10:28.038 04:01:42 -- nvmf/common.sh@479 -- # killprocess 3724555 00:10:28.038 04:01:42 -- common/autotest_common.sh@936 -- # '[' -z 3724555 ']' 00:10:28.038 04:01:42 -- common/autotest_common.sh@940 -- # kill -0 3724555 00:10:28.038 04:01:42 -- common/autotest_common.sh@941 -- # uname 00:10:28.038 04:01:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.038 04:01:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3724555 00:10:28.038 04:01:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:28.038 04:01:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:28.038 04:01:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3724555' 00:10:28.038 killing process with pid 3724555 00:10:28.038 04:01:42 -- common/autotest_common.sh@955 -- # kill 3724555 00:10:28.038 04:01:42 -- common/autotest_common.sh@960 -- # wait 3724555 00:10:28.298 04:01:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:28.298 04:01:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:28.298 04:01:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:28.298 04:01:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.298 04:01:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.298 04:01:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.298 04:01:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.298 04:01:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.202 04:01:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.202 00:10:30.202 real 0m18.799s 00:10:30.202 user 0m39.419s 00:10:30.202 sys 0m8.082s 00:10:30.202 04:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.202 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.202 ************************************ 00:10:30.202 END TEST nvmf_connect_stress 00:10:30.202 ************************************ 00:10:30.462 04:01:44 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:30.462 04:01:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:30.462 04:01:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.462 04:01:44 -- common/autotest_common.sh@10 -- # set +x 00:10:30.462 ************************************ 00:10:30.462 START TEST nvmf_fused_ordering 00:10:30.462 ************************************ 00:10:30.462 04:01:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:30.462 * Looking for test storage... 
00:10:30.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.462 04:01:44 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.462 04:01:44 -- nvmf/common.sh@7 -- # uname -s 00:10:30.462 04:01:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.462 04:01:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.462 04:01:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.462 04:01:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.462 04:01:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.462 04:01:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.462 04:01:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.462 04:01:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.462 04:01:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.462 04:01:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.462 04:01:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:30.462 04:01:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:30.462 04:01:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.462 04:01:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.462 04:01:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.462 04:01:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.462 04:01:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.721 04:01:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.721 04:01:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.721 04:01:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.721 04:01:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.721 04:01:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.721 04:01:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.721 04:01:44 -- paths/export.sh@5 -- # export PATH 00:10:30.722 04:01:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.722 04:01:44 -- nvmf/common.sh@47 -- # : 0 00:10:30.722 04:01:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.722 04:01:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.722 04:01:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.722 04:01:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.722 04:01:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.722 04:01:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.722 04:01:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.722 04:01:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.722 04:01:44 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:30.722 04:01:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:30.722 04:01:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.722 04:01:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:30.722 04:01:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:30.722 04:01:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:30.722 04:01:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.722 04:01:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.722 04:01:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.722 04:01:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:30.722 04:01:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:30.722 04:01:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.722 04:01:45 -- common/autotest_common.sh@10 -- # set +x 00:10:36.065 04:01:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:36.066 04:01:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.066 04:01:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.066 04:01:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.066 04:01:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.066 04:01:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.066 04:01:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.066 04:01:50 -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.066 04:01:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.066 04:01:50 -- nvmf/common.sh@296 -- # e810=() 00:10:36.066 04:01:50 -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.066 04:01:50 -- nvmf/common.sh@297 -- # x722=() 
00:10:36.066 04:01:50 -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.066 04:01:50 -- nvmf/common.sh@298 -- # mlx=() 00:10:36.066 04:01:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.066 04:01:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.066 04:01:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.066 04:01:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.066 04:01:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.066 04:01:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:36.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:36.066 04:01:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.066 04:01:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:36.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:36.066 04:01:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.066 04:01:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.066 04:01:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.066 04:01:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:36.066 Found net devices under 0000:af:00.0: cvl_0_0 00:10:36.066 04:01:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
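The device scan echoed above (pci_devs, pci_net_devs, "Found net devices under ...") is a sysfs walk: match each function's vendor:device pair against the supported-NIC tables, then read the bound kernel interface names out of the device's net/ directory. A minimal sketch of that same pattern, assuming the standard /sys/bus/pci layout and hard-coding the 0x159b device ID seen here:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor")                # e.g. 0x8086
        device=$(< "$pci/device")                # e.g. 0x159b for the E810 ports above
        [[ $vendor == "$intel" && $device == 0x159b ]] || continue
        pci_net_devs=("$pci"/net/*)              # one entry per bound kernel netdev
        [[ -e ${pci_net_devs[0]} ]] || continue  # skip functions with no netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")  # keep just the interface names
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done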
00:10:36.066 04:01:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.066 04:01:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.066 04:01:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.066 04:01:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:36.066 Found net devices under 0000:af:00.1: cvl_0_1 00:10:36.066 04:01:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.066 04:01:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:36.066 04:01:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:36.066 04:01:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:36.066 04:01:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.066 04:01:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.066 04:01:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.066 04:01:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.066 04:01:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.066 04:01:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.066 04:01:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.066 04:01:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.066 04:01:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.066 04:01:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.066 04:01:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.066 04:01:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.066 04:01:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.066 04:01:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.066 04:01:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.066 04:01:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.066 04:01:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.326 04:01:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.326 04:01:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.326 04:01:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:36.326 00:10:36.326 --- 10.0.0.2 ping statistics --- 00:10:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.326 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:36.326 04:01:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:10:36.326 00:10:36.326 --- 10.0.0.1 ping statistics --- 00:10:36.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.326 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:36.326 04:01:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.326 04:01:50 -- nvmf/common.sh@411 -- # return 0 00:10:36.326 04:01:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:36.326 04:01:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.326 04:01:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:36.326 04:01:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:36.326 04:01:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.326 04:01:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:36.326 04:01:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:36.326 04:01:50 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:36.326 04:01:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:36.326 04:01:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:36.326 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.326 04:01:50 -- nvmf/common.sh@470 -- # nvmfpid=3730207 00:10:36.326 04:01:50 -- nvmf/common.sh@471 -- # waitforlisten 3730207 00:10:36.326 04:01:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:36.326 04:01:50 -- common/autotest_common.sh@817 -- # '[' -z 3730207 ']' 00:10:36.326 04:01:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.326 04:01:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:36.326 04:01:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.326 04:01:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:36.326 04:01:50 -- common/autotest_common.sh@10 -- # set +x 00:10:36.326 [2024-04-19 04:01:50.747198] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:36.326 [2024-04-19 04:01:50.747251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.326 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.326 [2024-04-19 04:01:50.825037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.586 [2024-04-19 04:01:50.915132] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.586 [2024-04-19 04:01:50.915179] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.586 [2024-04-19 04:01:50.915190] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.586 [2024-04-19 04:01:50.915199] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.586 [2024-04-19 04:01:50.915206] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
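The nvmf_tcp_init sequence above is easier to follow collected in one place. These are the same commands the harness just ran (interface names and addresses exactly as logged; requires root), ending with the two ping checks that close the block:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns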
00:10:36.586 [2024-04-19 04:01:50.915228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.586 04:01:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:36.586 04:01:51 -- common/autotest_common.sh@850 -- # return 0 00:10:36.586 04:01:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:36.586 04:01:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 04:01:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.586 04:01:51 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 [2024-04-19 04:01:51.049908] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 [2024-04-19 04:01:51.070065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 NULL1 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:36.586 04:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:36.586 04:01:51 -- common/autotest_common.sh@10 -- # set +x 00:10:36.586 04:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:36.586 04:01:51 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:36.845 [2024-04-19 04:01:51.123710] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
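Before the client output that follows, the target was configured through the short RPC sequence visible in the rpc_cmd echoes above. Pulled out of the xtrace noise it reduces to the following; rpc.py stands for scripts/rpc.py against the target started inside the namespace, and the note on the client's output format is an observation from this log rather than documented behavior:

    # Target side: TCP transport, one subsystem listening on 10.0.0.2:4420,
    # backed by a 1000 MiB null bdev with 512-byte blocks (hence the
    # "Namespace ID: 1 size: 1GB" line below).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Initiator side: the example app connects and, in this log, prints one
    # fused_ordering(N) line per completed iteration, N = 0..1023.
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'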
00:10:36.845 [2024-04-19 04:01:51.123753] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730399 ]
00:10:36.845 EAL: No free 2048 kB hugepages reported on node 1
00:10:37.104 Attached to nqn.2016-06.io.spdk:cnode1
00:10:37.104 Namespace ID: 1 size: 1GB
00:10:37.104 fused_ordering(0)
[fused_ordering(1) through fused_ordering(820) follow in unbroken ascending order, timestamps 00:10:37 to 00:10:39; intermediate entries elided]
00:10:39.442 [2024-04-19 04:01:53.714689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef6bd0 is same with the state(5) to be set
00:10:39.443 fused_ordering(821)
[fused_ordering(822) through fused_ordering(1023) complete the sequence at 00:10:39.443; intermediate entries elided]
00:10:39.443 04:01:53 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:10:39.443 04:01:53 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:10:39.443 04:01:53 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:39.443 04:01:53 -- nvmf/common.sh@117 -- # sync
00:10:39.443 04:01:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:39.443 04:01:53 -- nvmf/common.sh@120 -- # set +e
00:10:39.443 04:01:53 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:39.443 04:01:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:39.443 rmmod nvme_tcp
00:10:39.443 rmmod nvme_fabrics
00:10:39.443 rmmod nvme_keyring
00:10:39.443 04:01:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:39.443 04:01:53 -- nvmf/common.sh@124 -- # set -e
00:10:39.443 04:01:53 -- nvmf/common.sh@125 -- # return 0
00:10:39.443 04:01:53 -- nvmf/common.sh@478 -- # '[' -n 3730207 ']'
00:10:39.443 04:01:53 -- nvmf/common.sh@479 -- # killprocess 3730207
00:10:39.443 04:01:53 -- common/autotest_common.sh@936 -- # '[' -z 3730207 ']'
00:10:39.443 04:01:53 -- common/autotest_common.sh@940 -- # kill -0 3730207
00:10:39.443 04:01:53 -- common/autotest_common.sh@941 --
# uname 00:10:39.443 04:01:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:39.443 04:01:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3730207 00:10:39.443 04:01:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:39.443 04:01:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:39.443 04:01:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3730207' 00:10:39.443 killing process with pid 3730207 00:10:39.443 04:01:53 -- common/autotest_common.sh@955 -- # kill 3730207 00:10:39.443 04:01:53 -- common/autotest_common.sh@960 -- # wait 3730207 00:10:39.701 04:01:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:39.701 04:01:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:39.701 04:01:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:39.701 04:01:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.701 04:01:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.701 04:01:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.701 04:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.701 04:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.607 04:01:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:41.607 00:10:41.607 real 0m11.265s 00:10:41.607 user 0m6.188s 00:10:41.607 sys 0m6.026s 00:10:41.607 04:01:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.866 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.866 ************************************ 00:10:41.866 END TEST nvmf_fused_ordering 00:10:41.866 ************************************ 00:10:41.866 04:01:56 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:41.866 04:01:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:41.866 04:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.866 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:41.866 ************************************ 00:10:41.866 START TEST nvmf_delete_subsystem 00:10:41.866 ************************************ 00:10:41.866 04:01:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:41.866 * Looking for test storage... 
00:10:42.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.125 04:01:56 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.125 04:01:56 -- nvmf/common.sh@7 -- # uname -s 00:10:42.125 04:01:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.125 04:01:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.125 04:01:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.125 04:01:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.125 04:01:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.125 04:01:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.125 04:01:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.125 04:01:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.125 04:01:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.125 04:01:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.125 04:01:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:42.125 04:01:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:42.125 04:01:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.125 04:01:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.125 04:01:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.125 04:01:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.125 04:01:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.125 04:01:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.125 04:01:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.125 04:01:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.125 04:01:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.125 04:01:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.125 04:01:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.125 04:01:56 -- paths/export.sh@5 -- # export PATH 00:10:42.125 04:01:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.125 04:01:56 -- nvmf/common.sh@47 -- # : 0 00:10:42.125 04:01:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.125 04:01:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.125 04:01:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.125 04:01:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.125 04:01:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.125 04:01:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.125 04:01:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.125 04:01:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.125 04:01:56 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:42.125 04:01:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:42.125 04:01:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.125 04:01:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:42.125 04:01:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:42.125 04:01:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:42.125 04:01:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.125 04:01:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.125 04:01:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.125 04:01:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:42.125 04:01:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:42.125 04:01:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.125 04:01:56 -- common/autotest_common.sh@10 -- # set +x 00:10:47.400 04:02:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:47.400 04:02:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.400 04:02:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.400 04:02:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.400 04:02:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.400 04:02:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.400 04:02:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.400 04:02:01 -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.400 04:02:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.400 04:02:01 -- nvmf/common.sh@296 -- # e810=() 00:10:47.400 04:02:01 -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.400 04:02:01 -- nvmf/common.sh@297 -- # x722=() 
00:10:47.400 04:02:01 -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.400 04:02:01 -- nvmf/common.sh@298 -- # mlx=() 00:10:47.400 04:02:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.400 04:02:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.400 04:02:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.400 04:02:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.400 04:02:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.400 04:02:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.659 04:02:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.659 04:02:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.659 04:02:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.659 04:02:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:47.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:47.659 04:02:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.659 04:02:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:47.659 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:47.659 04:02:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.659 04:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.659 04:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.659 04:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:47.659 Found net devices under 0000:af:00.0: cvl_0_0 00:10:47.659 04:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
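The PCI-to-netdev mapping that the gather_supported_nvmf_pci_devs loop performs above and below is a plain sysfs glob; a minimal standalone sketch, using the 0000:af:00.0 address detected in this run:

# List the kernel net devices bound to one PCI function,
# the same sysfs glob nvmf/common.sh traces above.
pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep only interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"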
00:10:47.659 04:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.659 04:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.659 04:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.659 04:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:47.659 Found net devices under 0000:af:00.1: cvl_0_1 00:10:47.659 04:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.659 04:02:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:47.659 04:02:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:47.659 04:02:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:47.659 04:02:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.659 04:02:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.659 04:02:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.659 04:02:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.659 04:02:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.659 04:02:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.659 04:02:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:47.659 04:02:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.659 04:02:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.659 04:02:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:47.659 04:02:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.659 04:02:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.659 04:02:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.659 04:02:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.659 04:02:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.659 04:02:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.659 04:02:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.659 04:02:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.659 04:02:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.917 04:02:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:10:47.917 00:10:47.917 --- 10.0.0.2 ping statistics --- 00:10:47.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.917 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:47.917 04:02:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:10:47.917 00:10:47.917 --- 10.0.0.1 ping statistics --- 00:10:47.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.917 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:47.917 04:02:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.917 04:02:02 -- nvmf/common.sh@411 -- # return 0 00:10:47.917 04:02:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:47.917 04:02:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.917 04:02:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:47.917 04:02:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:47.917 04:02:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.917 04:02:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:47.917 04:02:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:47.917 04:02:02 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:47.917 04:02:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:47.917 04:02:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:47.917 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:47.917 04:02:02 -- nvmf/common.sh@470 -- # nvmfpid=3734400 00:10:47.917 04:02:02 -- nvmf/common.sh@471 -- # waitforlisten 3734400 00:10:47.917 04:02:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:47.917 04:02:02 -- common/autotest_common.sh@817 -- # '[' -z 3734400 ']' 00:10:47.917 04:02:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.917 04:02:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:47.917 04:02:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.918 04:02:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:47.918 04:02:02 -- common/autotest_common.sh@10 -- # set +x 00:10:47.918 [2024-04-19 04:02:02.298722] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:10:47.918 [2024-04-19 04:02:02.298777] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.918 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.918 [2024-04-19 04:02:02.384044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.177 [2024-04-19 04:02:02.470504] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.177 [2024-04-19 04:02:02.470548] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.177 [2024-04-19 04:02:02.470558] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.177 [2024-04-19 04:02:02.470567] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.177 [2024-04-19 04:02:02.470575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
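Condensing the nvmf_tcp_init and nvmfappstart traces above into a runnable sketch: the target-side port is moved into a private network namespace, both sides are addressed, and nvmf_tgt is started inside the namespace. Paths are abbreviated from the absolute workspace paths in the log, and the rpc.py polling loop is only an approximation of the waitforlisten helper, whose body is not shown here:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
modprobe nvme-tcp                                                  # initiator needs the kernel transport
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done  # crude waitforlisten

Because the RPC endpoint is a UNIX domain socket on the shared filesystem (/var/tmp/spdk.sock), rpc.py can drive the target from the root namespace even though its TCP listener lives inside cvl_0_0_ns_spdk.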
00:10:48.177 [2024-04-19 04:02:02.470630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:48.177 [2024-04-19 04:02:02.470636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.177 04:02:02 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:10:48.177 04:02:02 -- common/autotest_common.sh@850 -- # return 0
00:10:48.177 04:02:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:10:48.177 04:02:02 -- common/autotest_common.sh@716 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 04:02:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 [2024-04-19 04:02:02.607860] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 [2024-04-19 04:02:02.628090] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 NULL1
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 Delay0
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:48.177 04:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:48.177 04:02:02 -- common/autotest_common.sh@10 -- # set +x
00:10:48.177 04:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@28 -- # perf_pid=3734633
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@30 -- # sleep 2
00:10:48.177 04:02:02 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:10:48.436 EAL: No free 2048 kB hugepages reported on node 1
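rpc_cmd in the trace above is a thin wrapper over scripts/rpc.py against that socket, so the fixture for this test can be reproduced roughly as below. The $rpc shorthand is mine, and note that bdev_delay_create takes its latency arguments in microseconds, so every I/O routed through Delay0 is held for about one second:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                        # transport options as traced (-u is the I/O unit size)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                                # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With spdk_nvme_perf then connecting at queue depth 128 (-q 128), the artificial one-second latency guarantees a full queue of in-flight commands for nvmf_delete_subsystem to abort, which is what the error storm below exercises.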
00:10:48.436 [2024-04-19 04:02:02.710176] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:10:50.340 04:02:04 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:50.340 04:02:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:50.340 04:02:04 -- common/autotest_common.sh@10 -- # set +x
00:10:50.599 Write completed with error (sct=0, sc=8)
00:10:50.599 Read completed with error (sct=0, sc=8)
00:10:50.599 Read completed with error (sct=0, sc=8)
00:10:50.599 starting I/O failed: -6
[... the Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 records repeat for the remaining queued requests ...]
00:10:50.599 [2024-04-19 04:02:04.880990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ecd0 is same with the state(5) to be set
[... further Read/Write completed with error (sct=0, sc=8) and starting I/O failed: -6 records ...]
00:10:50.599 [2024-04-19 04:02:04.884696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f165400c250 is same with the state(5) to be set
[... a long run of Read/Write completed with error (sct=0, sc=8) records as the remaining completions drain ...]
00:10:51.544 [2024-04-19 04:02:05.848207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a24be0 is same with the state(5) to be set
[... further Read/Write completed with error (sct=0, sc=8) records ...]
00:10:51.544 [2024-04-19 04:02:05.886215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0eb40 is same with the state(5) to be set
[... further Read/Write completed with error (sct=0, sc=8) records ...]
00:10:51.544 [2024-04-19 04:02:05.886545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0e780 is same with the state(5) to be set
[... further Read/Write completed with error (sct=0, sc=8) records ...]
00:10:51.544 [2024-04-19 04:02:05.886667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0e910 is same with the state(5) to be set
[... further Read/Write completed with error (sct=0, sc=8) records ...]
00:10:51.544 [2024-04-19 04:02:05.887171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f165400c510 is same with the state(5) to be set
00:10:51.544 [2024-04-19 04:02:05.887841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a24be0 (9): Bad file descriptor
00:10:51.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:51.545 04:02:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:51.545 04:02:05 -- target/delete_subsystem.sh@34 -- # delay=0
00:10:51.545 04:02:05 -- target/delete_subsystem.sh@35 -- # kill -0 3734633
00:10:51.545 04:02:05 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:51.545 Initializing NVMe Controllers
00:10:51.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:51.545 Controller IO queue size 128, less than required.
00:10:51.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:51.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:51.545 Initialization complete. Launching workers.
00:10:51.545 ========================================================
00:10:51.545 Latency(us)
00:10:51.545 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     186.68       0.09  952359.17     629.75 1013027.02
00:10:51.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     157.88       0.08  868465.03     334.25 1014737.97
00:10:51.545 ========================================================
00:10:51.545 Total                                                                  :     344.56       0.17  913917.76     334.25 1014737.97
00:10:51.545
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@35 -- # kill -0 3734633
00:10:52.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3734633) - No such process
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@45 -- # NOT wait 3734633
00:10:52.113 04:02:06 -- common/autotest_common.sh@638 -- # local es=0
00:10:52.113 04:02:06 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3734633
00:10:52.113 04:02:06 -- common/autotest_common.sh@626 -- # local arg=wait
00:10:52.113 04:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:52.113 04:02:06 -- common/autotest_common.sh@630 -- # type -t wait
00:10:52.113 04:02:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:10:52.113 04:02:06 -- common/autotest_common.sh@641 -- # wait 3734633
00:10:52.113 04:02:06 -- common/autotest_common.sh@641 -- # es=1
00:10:52.113 04:02:06 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:10:52.113 04:02:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:10:52.113 04:02:06 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:52.113 04:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:52.113 04:02:06 -- common/autotest_common.sh@10 -- # set +x
00:10:52.113 04:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:52.113 04:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:52.113 04:02:06 -- common/autotest_common.sh@10 -- # set +x
00:10:52.113 [2024-04-19 04:02:06.414160] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:52.113 04:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:52.113 04:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:10:52.113 04:02:06 -- common/autotest_common.sh@10 -- # set +x
00:10:52.113 04:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@54 -- # perf_pid=3735208
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@56 -- # delay=0
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:52.113 04:02:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5
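The @57/@58 pairs above and below come from a polling loop inside delete_subsystem.sh; a rough reconstruction from the trace (the loop bound is taken from the (( delay++ > 20 )) records, the script text itself is not in this log):

perf_pid=3735208     # pid captured by the test above
delay=0
while kill -0 $perf_pid 2>/dev/null; do    # perf still running means I/O is still backed up behind Delay0
    (( delay++ > 20 )) && exit 1           # about 10 s at 0.5 s per poll before declaring failure
    sleep 0.5
done

kill -0 sends no signal at all; it only asks the kernel whether the pid still exists, which is why the loop ends with the kill: (3735208) - No such process record once spdk_nvme_perf exits on its own.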
00:10:52.113 EAL: No free 2048 kB hugepages reported on node 1
00:10:52.113 [2024-04-19 04:02:06.473557] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:10:52.681 04:02:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:52.681 04:02:06 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:52.681 04:02:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:52.940 04:02:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:52.941 04:02:07 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:52.941 04:02:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:53.506 04:02:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:53.506 04:02:07 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:53.506 04:02:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:54.074 04:02:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:54.074 04:02:08 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:54.074 04:02:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:54.643 04:02:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:54.643 04:02:08 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:54.643 04:02:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:55.212 04:02:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:55.212 04:02:09 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:55.212 04:02:09 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:55.212 Initializing NVMe Controllers
00:10:55.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:55.212 Controller IO queue size 128, less than required.
00:10:55.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:55.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:55.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:55.212 Initialization complete. Launching workers.
00:10:55.212 ========================================================
00:10:55.212 Latency(us)
00:10:55.213 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:55.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003472.08 1000179.96 1010714.55
00:10:55.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006068.03 1000377.99 1015353.44
00:10:55.213 ========================================================
00:10:55.213 Total                                                                  :     256.00       0.12 1004770.05 1000179.96 1015353.44
00:10:55.213
00:10:55.473 04:02:09 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:55.473 04:02:09 -- target/delete_subsystem.sh@57 -- # kill -0 3735208
00:10:55.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3735208) - No such process
00:10:55.473 04:02:09 -- target/delete_subsystem.sh@67 -- # wait 3735208
00:10:55.473 04:02:09 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:55.473 04:02:09 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:55.473 04:02:09 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:55.473 04:02:09 -- nvmf/common.sh@117 -- # sync
00:10:55.473 04:02:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:55.473 04:02:09 -- nvmf/common.sh@120 -- # set +e
00:10:55.473 04:02:09 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:55.473 04:02:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:55.473 rmmod nvme_tcp
00:10:55.473 rmmod nvme_fabrics
00:10:55.473 rmmod nvme_keyring
00:10:55.771 04:02:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:55.771 04:02:10 -- nvmf/common.sh@124 -- # set -e
00:10:55.771 04:02:10 -- nvmf/common.sh@125 -- # return 0
00:10:55.771 04:02:10 -- nvmf/common.sh@478 -- # '[' -n 3734400 ']'
00:10:55.771 04:02:10 -- nvmf/common.sh@479 -- # killprocess 3734400
00:10:55.771 04:02:10 -- common/autotest_common.sh@936 -- # '[' -z 3734400 ']'
00:10:55.771 04:02:10 -- common/autotest_common.sh@940 -- # kill -0 3734400
00:10:55.771 04:02:10 -- common/autotest_common.sh@941 -- # uname
00:10:55.771 04:02:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:55.771 04:02:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3734400
00:10:55.771 04:02:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:55.771 04:02:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:55.771 04:02:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3734400'
00:10:55.771 killing process with pid 3734400
00:10:55.771 04:02:10 -- common/autotest_common.sh@955 -- # kill 3734400
00:10:55.771 04:02:10 -- common/autotest_common.sh@960 -- # wait 3734400
00:10:56.029 04:02:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:56.029 04:02:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:10:56.029 04:02:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:10:56.029 04:02:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:56.029 04:02:10 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:56.029 04:02:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:56.029 04:02:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:56.029 04:02:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:57.936 04:02:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:57.936
00:10:57.936 real 0m16.058s
00:10:57.936 user 0m29.344s
00:10:57.936 sys 0m5.267s
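The nvmftestfini teardown just traced (module unload retries, then killprocess on the nvmf_tgt pid) reduces to roughly the following; this is a reconstruction from the @121/@122 and @936-@960 records above, not the verbatim helpers:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # unload can fail while connections drain, hence the retry loop
done
modprobe -v -r nvme-fabrics
set -e
pid=3734400                                 # nvmfpid from this run
name=$(ps --no-headers -o comm= $pid)       # reactor_0 here; the helper refuses to signal sudo itself
if [ -n "$name" ] && [ "$name" != sudo ]; then
    echo "killing process with pid $pid"
    kill $pid
    wait $pid                               # reap the process so its resources are released before the next test
fi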
00:10:57.936 04:02:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:57.936 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:10:57.936 ************************************ 00:10:57.936 END TEST nvmf_delete_subsystem 00:10:57.936 ************************************ 00:10:57.936 04:02:12 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:57.936 04:02:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:57.936 04:02:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.936 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:10:58.195 ************************************ 00:10:58.195 START TEST nvmf_ns_masking 00:10:58.195 ************************************ 00:10:58.195 04:02:12 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:58.195 * Looking for test storage... 00:10:58.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.195 04:02:12 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.195 04:02:12 -- nvmf/common.sh@7 -- # uname -s 00:10:58.195 04:02:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.195 04:02:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.195 04:02:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.195 04:02:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.195 04:02:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.195 04:02:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.195 04:02:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.195 04:02:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.195 04:02:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.195 04:02:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.195 04:02:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:58.195 04:02:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:58.195 04:02:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.195 04:02:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.195 04:02:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.195 04:02:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.195 04:02:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.195 04:02:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.195 04:02:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.195 04:02:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.195 04:02:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.195 04:02:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.195 04:02:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.195 04:02:12 -- paths/export.sh@5 -- # export PATH 00:10:58.196 04:02:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.196 04:02:12 -- nvmf/common.sh@47 -- # : 0 00:10:58.196 04:02:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.196 04:02:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.196 04:02:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.196 04:02:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.196 04:02:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.196 04:02:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.196 04:02:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.196 04:02:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.196 04:02:12 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.196 04:02:12 -- target/ns_masking.sh@11 -- # loops=5 00:10:58.196 04:02:12 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:58.196 04:02:12 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:58.196 04:02:12 -- target/ns_masking.sh@15 -- # uuidgen 00:10:58.196 04:02:12 -- target/ns_masking.sh@15 -- # HOSTID=a5f52acf-4d94-403e-a80d-8f4227c3a50d 00:10:58.196 04:02:12 -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:58.196 04:02:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:58.196 04:02:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.196 04:02:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:58.196 04:02:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:58.196 04:02:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:58.196 04:02:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.196 04:02:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.196 04:02:12 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:10:58.196 04:02:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:58.196 04:02:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:58.196 04:02:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.196 04:02:12 -- common/autotest_common.sh@10 -- # set +x 00:11:04.771 04:02:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:04.771 04:02:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.771 04:02:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.771 04:02:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.771 04:02:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.771 04:02:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.771 04:02:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.771 04:02:18 -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.771 04:02:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.771 04:02:18 -- nvmf/common.sh@296 -- # e810=() 00:11:04.771 04:02:18 -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.771 04:02:18 -- nvmf/common.sh@297 -- # x722=() 00:11:04.771 04:02:18 -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.771 04:02:18 -- nvmf/common.sh@298 -- # mlx=() 00:11:04.771 04:02:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.771 04:02:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.771 04:02:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.771 04:02:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:04.771 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:04.771 04:02:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.771 04:02:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:04.771 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:04.771 04:02:18 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.771 04:02:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.771 04:02:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.771 04:02:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:04.771 Found net devices under 0000:af:00.0: cvl_0_0 00:11:04.771 04:02:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.771 04:02:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.771 04:02:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.771 04:02:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:04.771 Found net devices under 0000:af:00.1: cvl_0_1 00:11:04.771 04:02:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:04.771 04:02:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:04.771 04:02:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.771 04:02:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.771 04:02:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.771 04:02:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.771 04:02:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.771 04:02:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.771 04:02:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.771 04:02:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.771 04:02:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.771 04:02:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.771 04:02:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.771 04:02:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.771 04:02:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.771 04:02:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.771 04:02:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.771 04:02:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.771 04:02:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.771 04:02:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.771 04:02:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:11:04.771 00:11:04.771 --- 10.0.0.2 ping statistics --- 00:11:04.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.771 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:04.771 04:02:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:11:04.771 00:11:04.771 --- 10.0.0.1 ping statistics --- 00:11:04.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.771 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:04.771 04:02:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.771 04:02:18 -- nvmf/common.sh@411 -- # return 0 00:11:04.771 04:02:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:04.771 04:02:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.771 04:02:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:04.771 04:02:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.772 04:02:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:04.772 04:02:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:04.772 04:02:18 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:04.772 04:02:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:04.772 04:02:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:04.772 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.772 04:02:18 -- nvmf/common.sh@470 -- # nvmfpid=3739484 00:11:04.772 04:02:18 -- nvmf/common.sh@471 -- # waitforlisten 3739484 00:11:04.772 04:02:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.772 04:02:18 -- common/autotest_common.sh@817 -- # '[' -z 3739484 ']' 00:11:04.772 04:02:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.772 04:02:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:04.772 04:02:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.772 04:02:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:04.772 04:02:18 -- common/autotest_common.sh@10 -- # set +x 00:11:04.772 [2024-04-19 04:02:18.399320] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:04.772 [2024-04-19 04:02:18.399400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.772 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.772 [2024-04-19 04:02:18.486348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.772 [2024-04-19 04:02:18.577148] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
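[Note] The connectivity checks above validate SPDK's nvmf_tcp_init convention: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened, and both directions are pinged before any NVMe/TCP traffic flows. A condensed sketch of that setup, using only commands visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target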
00:11:04.772 [2024-04-19 04:02:18.577192] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.772 [2024-04-19 04:02:18.577202] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.772 [2024-04-19 04:02:18.577211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.772 [2024-04-19 04:02:18.577219] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.772 [2024-04-19 04:02:18.577266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.772 [2024-04-19 04:02:18.577284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.772 [2024-04-19 04:02:18.577401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.772 [2024-04-19 04:02:18.577402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.031 04:02:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:05.031 04:02:19 -- common/autotest_common.sh@850 -- # return 0 00:11:05.031 04:02:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:05.031 04:02:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:05.031 04:02:19 -- common/autotest_common.sh@10 -- # set +x 00:11:05.031 04:02:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.031 04:02:19 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.290 [2024-04-19 04:02:19.600607] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.290 04:02:19 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:05.290 04:02:19 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:05.290 04:02:19 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:05.550 Malloc1 00:11:05.550 04:02:19 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:05.808 Malloc2 00:11:05.808 04:02:20 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.067 04:02:20 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:06.326 04:02:20 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.584 [2024-04-19 04:02:20.880053] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.584 04:02:20 -- target/ns_masking.sh@61 -- # connect 00:11:06.584 04:02:20 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f52acf-4d94-403e-a80d-8f4227c3a50d -a 10.0.0.2 -s 4420 -i 4 00:11:06.584 04:02:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.584 04:02:21 -- common/autotest_common.sh@1184 -- # local i=0 00:11:06.584 04:02:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.584 04:02:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
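[Note] Everything from nvmf_create_transport down to the nvme connect above is the fixture for the masking assertions that follow: one TCP transport, two 64 MiB Malloc bdevs, a single subsystem, and a host identified by the UUID generated at the top of the script. Condensed (rpc.py here abbreviates the full scripts/rpc.py path used in the trace, talking to the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I a5f52acf-4d94-403e-a80d-8f4227c3a50d -a 10.0.0.2 -s 4420 -i 4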
00:11:06.584 04:02:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:09.119 04:02:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:09.119 04:02:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:09.119 04:02:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.119 04:02:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:09.119 04:02:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.119 04:02:23 -- common/autotest_common.sh@1194 -- # return 0 00:11:09.119 04:02:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:09.119 04:02:23 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:09.119 04:02:23 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:09.119 04:02:23 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:09.119 04:02:23 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:09.119 [ 0]:0x1 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nguid=03e2bfadce964bf5a7223094b8dc7428 00:11:09.119 04:02:23 -- target/ns_masking.sh@41 -- # [[ 03e2bfadce964bf5a7223094b8dc7428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.119 04:02:23 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:09.119 04:02:23 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:09.119 [ 0]:0x1 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nguid=03e2bfadce964bf5a7223094b8dc7428 00:11:09.119 04:02:23 -- target/ns_masking.sh@41 -- # [[ 03e2bfadce964bf5a7223094b8dc7428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.119 04:02:23 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:09.119 04:02:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:09.119 [ 1]:0x2 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:09.119 04:02:23 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:09.119 04:02:23 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.119 04:02:23 -- target/ns_masking.sh@69 -- # disconnect 00:11:09.119 04:02:23 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.119 04:02:23 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.378 04:02:23 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:09.636 04:02:24 -- target/ns_masking.sh@77 -- # connect 1 00:11:09.636 04:02:24 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f52acf-4d94-403e-a80d-8f4227c3a50d -a 10.0.0.2 -s 4420 -i 4 00:11:09.894 04:02:24 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:09.894 04:02:24 -- common/autotest_common.sh@1184 -- # local i=0 00:11:09.894 04:02:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.894 04:02:24 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:09.894 04:02:24 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:09.894 04:02:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:12.427 04:02:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:12.427 04:02:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:12.427 04:02:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.427 04:02:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:12.427 04:02:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.427 04:02:26 -- common/autotest_common.sh@1194 -- # return 0 00:11:12.427 04:02:26 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:12.427 04:02:26 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:12.427 04:02:26 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:12.427 04:02:26 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:12.427 04:02:26 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:12.427 04:02:26 -- common/autotest_common.sh@638 -- # local es=0 00:11:12.427 04:02:26 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.427 04:02:26 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:12.427 04:02:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:12.427 04:02:26 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:12.427 04:02:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:12.427 04:02:26 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:12.427 04:02:26 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.427 04:02:26 -- common/autotest_common.sh@641 -- # es=1 00:11:12.427 04:02:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:12.427 04:02:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:12.427 04:02:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:12.427 04:02:26 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:12.427 [ 0]:0x2 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:12.427 04:02:26 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.427 04:02:26 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.427 04:02:26 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:12.427 [ 0]:0x1 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nguid=03e2bfadce964bf5a7223094b8dc7428 00:11:12.427 04:02:26 -- target/ns_masking.sh@41 -- # [[ 03e2bfadce964bf5a7223094b8dc7428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.427 04:02:26 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.427 04:02:26 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:12.427 [ 1]:0x2 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.427 04:02:26 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.687 04:02:26 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:12.687 04:02:26 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.687 04:02:26 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.687 04:02:27 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:12.687 04:02:27 -- common/autotest_common.sh@638 -- # local es=0 00:11:12.687 04:02:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.687 04:02:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:12.687 04:02:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:12.687 04:02:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:12.687 04:02:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:12.687 04:02:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:12.687 04:02:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:12.687 04:02:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.687 04:02:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.687 04:02:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.946 04:02:27 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:12.946 04:02:27 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.946 04:02:27 -- common/autotest_common.sh@641 -- # es=1 00:11:12.946 04:02:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:12.946 04:02:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:12.946 04:02:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:12.946 04:02:27 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:12.946 04:02:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:12.946 04:02:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:12.946 [ 0]:0x2 00:11:12.946 04:02:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.946 04:02:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:12.946 04:02:27 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:12.946 04:02:27 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.946 04:02:27 -- target/ns_masking.sh@91 -- # disconnect 00:11:12.947 04:02:27 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.947 04:02:27 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:13.206 04:02:27 -- target/ns_masking.sh@95 -- # connect 2 00:11:13.206 04:02:27 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5f52acf-4d94-403e-a80d-8f4227c3a50d -a 10.0.0.2 -s 4420 -i 4 00:11:13.465 04:02:27 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:13.465 04:02:27 -- common/autotest_common.sh@1184 -- # local i=0 00:11:13.465 04:02:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.465 04:02:27 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:13.465 04:02:27 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:13.465 04:02:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:15.370 04:02:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:15.370 04:02:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:15.370 04:02:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.370 04:02:29 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:15.370 04:02:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.370 04:02:29 -- common/autotest_common.sh@1194 -- # return 0 00:11:15.370 04:02:29 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:15.370 04:02:29 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:15.370 04:02:29 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:15.370 04:02:29 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:15.370 04:02:29 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:15.370 04:02:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.370 04:02:29 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:15.370 [ 0]:0x1 00:11:15.370 04:02:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.370 04:02:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.629 04:02:29 -- target/ns_masking.sh@40 -- # nguid=03e2bfadce964bf5a7223094b8dc7428 00:11:15.629 04:02:29 -- target/ns_masking.sh@41 -- # [[ 03e2bfadce964bf5a7223094b8dc7428 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.629 04:02:29 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:15.629 04:02:29 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.629 04:02:29 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:15.629 [ 1]:0x2 
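[Note] Each ns_is_visible / "NOT ns_is_visible" step above performs the same two-part check: list the controller's active namespaces, then read the NGUID of the NSID under test — a masked namespace reports an all-zero NGUID. Reconstructed from the commands in the trace, the helper is roughly:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                       # prints e.g. "[ 0]:0x1" when listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # Visible namespaces carry a real NGUID; masked ones read back as all zeroes.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }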
00:11:15.629 04:02:29 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.629 04:02:29 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.629 04:02:29 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:15.629 04:02:29 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.629 04:02:29 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:15.888 04:02:30 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:15.888 04:02:30 -- common/autotest_common.sh@638 -- # local es=0 00:11:15.888 04:02:30 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:15.888 04:02:30 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:15.888 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.888 04:02:30 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:15.888 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.888 04:02:30 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:15.889 04:02:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.889 04:02:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:15.889 04:02:30 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.889 04:02:30 -- common/autotest_common.sh@641 -- # es=1 00:11:15.889 04:02:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:15.889 04:02:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:15.889 04:02:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:15.889 04:02:30 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:15.889 04:02:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:15.889 04:02:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:15.889 [ 0]:0x2 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:15.889 04:02:30 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:15.889 04:02:30 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:15.889 04:02:30 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:15.889 04:02:30 -- common/autotest_common.sh@638 -- # local es=0 00:11:15.889 04:02:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:15.889 04:02:30 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.889 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.889 04:02:30 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.889 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.889 04:02:30 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.889 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.889 04:02:30 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.889 04:02:30 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:15.889 04:02:30 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:16.148 [2024-04-19 04:02:30.574021] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:16.148 request: 00:11:16.148 { 00:11:16.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.148 "nsid": 2, 00:11:16.148 "host": "nqn.2016-06.io.spdk:host1", 00:11:16.148 "method": "nvmf_ns_remove_host", 00:11:16.148 "req_id": 1 00:11:16.148 } 00:11:16.148 Got JSON-RPC error response 00:11:16.148 response: 00:11:16.148 { 00:11:16.148 "code": -32602, 00:11:16.148 "message": "Invalid parameters" 00:11:16.148 } 00:11:16.148 04:02:30 -- common/autotest_common.sh@641 -- # es=1 00:11:16.148 04:02:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:16.148 04:02:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:16.148 04:02:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:16.148 04:02:30 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:16.148 04:02:30 -- common/autotest_common.sh@638 -- # local es=0 00:11:16.148 04:02:30 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:16.148 04:02:30 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:16.148 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:16.148 04:02:30 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:16.148 04:02:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:16.148 04:02:30 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:16.148 04:02:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:16.148 04:02:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:16.148 04:02:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:16.148 04:02:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:16.148 04:02:30 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:16.148 04:02:30 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.148 04:02:30 -- common/autotest_common.sh@641 -- # es=1 00:11:16.148 04:02:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:16.148 04:02:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:16.148 04:02:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:16.148 04:02:30 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:16.148 04:02:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:16.148 04:02:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:16.148 [ 0]:0x2 00:11:16.148 04:02:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:16.148 04:02:30 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:16.406 04:02:30 -- target/ns_masking.sh@40 -- # nguid=b0d590383f6e45dcb4d45035925204eb 00:11:16.406 04:02:30 -- target/ns_masking.sh@41 -- # [[ b0d590383f6e45dcb4d45035925204eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:16.406 04:02:30 -- target/ns_masking.sh@108 -- # disconnect 00:11:16.406 04:02:30 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.406 04:02:30 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.665 04:02:30 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:16.665 04:02:30 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:16.665 04:02:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:16.665 04:02:30 -- nvmf/common.sh@117 -- # sync 00:11:16.665 04:02:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.665 04:02:31 -- nvmf/common.sh@120 -- # set +e 00:11:16.665 04:02:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.665 04:02:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:16.665 rmmod nvme_tcp 00:11:16.665 rmmod nvme_fabrics 00:11:16.665 rmmod nvme_keyring 00:11:16.665 04:02:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.665 04:02:31 -- nvmf/common.sh@124 -- # set -e 00:11:16.665 04:02:31 -- nvmf/common.sh@125 -- # return 0 00:11:16.665 04:02:31 -- nvmf/common.sh@478 -- # '[' -n 3739484 ']' 00:11:16.665 04:02:31 -- nvmf/common.sh@479 -- # killprocess 3739484 00:11:16.665 04:02:31 -- common/autotest_common.sh@936 -- # '[' -z 3739484 ']' 00:11:16.665 04:02:31 -- common/autotest_common.sh@940 -- # kill -0 3739484 00:11:16.665 04:02:31 -- common/autotest_common.sh@941 -- # uname 00:11:16.665 04:02:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.665 04:02:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3739484 00:11:16.665 04:02:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.665 04:02:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.665 04:02:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3739484' 00:11:16.665 killing process with pid 3739484 00:11:16.665 04:02:31 -- common/autotest_common.sh@955 -- # kill 3739484 00:11:16.665 04:02:31 -- common/autotest_common.sh@960 -- # wait 3739484 00:11:16.924 04:02:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:16.924 04:02:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:16.924 04:02:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:16.924 04:02:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.924 04:02:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.924 04:02:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.924 04:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.924 04:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.475 04:02:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:19.475 00:11:19.475 real 0m20.897s 00:11:19.475 user 0m56.254s 00:11:19.475 sys 0m5.980s 00:11:19.475 04:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.475 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.475 ************************************ 00:11:19.475 END TEST nvmf_ns_masking 00:11:19.475 
************************************ 00:11:19.475 04:02:33 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:19.475 04:02:33 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:19.476 04:02:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:19.476 04:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.476 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:11:19.476 ************************************ 00:11:19.476 START TEST nvmf_nvme_cli 00:11:19.476 ************************************ 00:11:19.476 04:02:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:19.476 * Looking for test storage... 00:11:19.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.476 04:02:33 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.476 04:02:33 -- nvmf/common.sh@7 -- # uname -s 00:11:19.476 04:02:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.476 04:02:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.476 04:02:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.476 04:02:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.476 04:02:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.476 04:02:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.476 04:02:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.476 04:02:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.476 04:02:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.476 04:02:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.476 04:02:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:19.476 04:02:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:19.476 04:02:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.476 04:02:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.476 04:02:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.476 04:02:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.476 04:02:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.476 04:02:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.476 04:02:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.476 04:02:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.476 04:02:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.476 04:02:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.476 04:02:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.476 04:02:33 -- paths/export.sh@5 -- # export PATH 00:11:19.476 04:02:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.476 04:02:33 -- nvmf/common.sh@47 -- # : 0 00:11:19.476 04:02:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.476 04:02:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.476 04:02:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.476 04:02:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.476 04:02:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.476 04:02:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.476 04:02:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.476 04:02:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.476 04:02:33 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.476 04:02:33 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.476 04:02:33 -- target/nvme_cli.sh@14 -- # devs=() 00:11:19.476 04:02:33 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:19.476 04:02:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:19.476 04:02:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.476 04:02:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:19.476 04:02:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:19.476 04:02:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:19.476 04:02:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.476 04:02:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.476 04:02:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.476 04:02:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:19.476 04:02:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:19.476 04:02:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:19.476 04:02:33 -- common/autotest_common.sh@10 -- # set +x 00:11:24.758 04:02:39 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:24.758 04:02:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.758 04:02:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.758 04:02:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.758 04:02:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.758 04:02:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.758 04:02:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.759 04:02:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.759 04:02:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.759 04:02:39 -- nvmf/common.sh@296 -- # e810=() 00:11:24.759 04:02:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.759 04:02:39 -- nvmf/common.sh@297 -- # x722=() 00:11:24.759 04:02:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.759 04:02:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:24.759 04:02:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.759 04:02:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.759 04:02:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.759 04:02:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.759 04:02:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.759 04:02:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:24.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:24.759 04:02:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.759 04:02:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:24.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:24.759 04:02:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:11:24.759 04:02:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.759 04:02:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.759 04:02:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.759 04:02:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:24.759 Found net devices under 0000:af:00.0: cvl_0_0 00:11:24.759 04:02:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.759 04:02:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.759 04:02:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.759 04:02:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.759 04:02:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:24.759 Found net devices under 0000:af:00.1: cvl_0_1 00:11:24.759 04:02:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.759 04:02:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:24.759 04:02:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:24.759 04:02:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:24.759 04:02:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.759 04:02:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.759 04:02:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.759 04:02:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.759 04:02:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.759 04:02:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.759 04:02:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.759 04:02:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.759 04:02:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.759 04:02:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.759 04:02:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.759 04:02:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.759 04:02:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.759 04:02:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.759 04:02:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.018 04:02:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.018 04:02:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.018 04:02:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.018 04:02:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.018 04:02:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:25.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:11:25.018 00:11:25.018 --- 10.0.0.2 ping statistics --- 00:11:25.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.018 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:25.018 04:02:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:11:25.018 00:11:25.018 --- 10.0.0.1 ping statistics --- 00:11:25.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.018 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:25.018 04:02:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.018 04:02:39 -- nvmf/common.sh@411 -- # return 0 00:11:25.018 04:02:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:25.018 04:02:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.018 04:02:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:25.018 04:02:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:25.018 04:02:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.018 04:02:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:25.018 04:02:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:25.018 04:02:39 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:25.018 04:02:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:25.018 04:02:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:25.018 04:02:39 -- common/autotest_common.sh@10 -- # set +x 00:11:25.018 04:02:39 -- nvmf/common.sh@470 -- # nvmfpid=3745708 00:11:25.018 04:02:39 -- nvmf/common.sh@471 -- # waitforlisten 3745708 00:11:25.018 04:02:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.018 04:02:39 -- common/autotest_common.sh@817 -- # '[' -z 3745708 ']' 00:11:25.018 04:02:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.018 04:02:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.018 04:02:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.018 04:02:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.018 04:02:39 -- common/autotest_common.sh@10 -- # set +x 00:11:25.018 [2024-04-19 04:02:39.508879] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:25.018 [2024-04-19 04:02:39.508937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.276 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.276 [2024-04-19 04:02:39.595668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.276 [2024-04-19 04:02:39.684818] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.276 [2024-04-19 04:02:39.684861] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:25.276 [2024-04-19 04:02:39.684871] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.276 [2024-04-19 04:02:39.684879] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.276 [2024-04-19 04:02:39.684886] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.276 [2024-04-19 04:02:39.684937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.276 [2024-04-19 04:02:39.685029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.276 [2024-04-19 04:02:39.685153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.276 [2024-04-19 04:02:39.685153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.208 04:02:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:26.208 04:02:40 -- common/autotest_common.sh@850 -- # return 0 00:11:26.208 04:02:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:26.208 04:02:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 04:02:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.208 04:02:40 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 [2024-04-19 04:02:40.496288] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 Malloc0 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 Malloc1 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 [2024-04-19 04:02:40.582683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:26.208 04:02:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.208 04:02:40 -- common/autotest_common.sh@10 -- # set +x 00:11:26.208 04:02:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.208 04:02:40 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:26.208 00:11:26.208 Discovery Log Number of Records 2, Generation counter 2 00:11:26.208 =====Discovery Log Entry 0====== 00:11:26.208 trtype: tcp 00:11:26.208 adrfam: ipv4 00:11:26.208 subtype: current discovery subsystem 00:11:26.208 treq: not required 00:11:26.208 portid: 0 00:11:26.208 trsvcid: 4420 00:11:26.208 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:26.208 traddr: 10.0.0.2 00:11:26.208 eflags: explicit discovery connections, duplicate discovery information 00:11:26.208 sectype: none 00:11:26.208 =====Discovery Log Entry 1====== 00:11:26.209 trtype: tcp 00:11:26.209 adrfam: ipv4 00:11:26.209 subtype: nvme subsystem 00:11:26.209 treq: not required 00:11:26.209 portid: 0 00:11:26.209 trsvcid: 4420 00:11:26.209 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:26.209 traddr: 10.0.0.2 00:11:26.209 eflags: none 00:11:26.209 sectype: none 00:11:26.209 04:02:40 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:26.209 04:02:40 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:26.209 04:02:40 -- nvmf/common.sh@511 -- # local dev _ 00:11:26.209 04:02:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:26.209 04:02:40 -- nvmf/common.sh@510 -- # nvme list 00:11:26.209 04:02:40 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:26.209 04:02:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:26.209 04:02:40 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:26.209 04:02:40 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:26.209 04:02:40 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:26.209 04:02:40 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.583 04:02:42 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:27.583 04:02:42 -- common/autotest_common.sh@1184 -- # local i=0 00:11:27.583 04:02:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.583 04:02:42 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:27.583 04:02:42 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:27.583 04:02:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:30.115 04:02:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:30.115 04:02:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:30.115 04:02:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.115 04:02:44 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
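A condensed sketch of the sequence the nvme_cli test traces above, collected for reference; rpc_cmd stands in for scripts/rpc.py against the running nvmf_tgt, and all values are the ones used in this run (the lsblk serial-polling trace resumes below):

    # Target side: TCP transport, two 64 MiB malloc namespaces, one subsystem.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Host side: discover, connect, then poll lsblk until both namespaces
    # with serial SPDKISFASTANDAWESOME appear (the waitforserial loop traced here).
    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
          --hostid=00abaa28-3537-eb11-906e-0017a4403562)
    nvme discover "${host[@]}" -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420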
00:11:30.115 04:02:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.115 04:02:44 -- common/autotest_common.sh@1194 -- # return 0 00:11:30.115 04:02:44 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:30.115 04:02:44 -- nvmf/common.sh@511 -- # local dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@510 -- # nvme list 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:30.115 /dev/nvme0n1 ]] 00:11:30.115 04:02:44 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:30.115 04:02:44 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:30.115 04:02:44 -- nvmf/common.sh@511 -- # local dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@510 -- # nvme list 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:30.115 04:02:44 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:30.115 04:02:44 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:30.115 04:02:44 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.115 04:02:44 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.115 04:02:44 -- common/autotest_common.sh@1205 -- # local i=0 00:11:30.115 04:02:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:30.115 04:02:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.115 04:02:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:30.115 04:02:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.115 04:02:44 -- common/autotest_common.sh@1217 -- # return 0 00:11:30.115 04:02:44 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:30.115 04:02:44 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.115 04:02:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:30.115 04:02:44 -- common/autotest_common.sh@10 -- # set +x 00:11:30.115 04:02:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:30.115 04:02:44 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:30.115 04:02:44 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:30.115 04:02:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:30.115 04:02:44 -- nvmf/common.sh@117 -- # sync 00:11:30.115 04:02:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.115 04:02:44 -- nvmf/common.sh@120 -- # set +e 00:11:30.115 04:02:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.115 04:02:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.115 rmmod nvme_tcp 00:11:30.115 rmmod nvme_fabrics 00:11:30.115 rmmod nvme_keyring 00:11:30.115 04:02:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.115 04:02:44 -- nvmf/common.sh@124 -- # set -e 00:11:30.115 04:02:44 -- nvmf/common.sh@125 -- # return 0 00:11:30.115 04:02:44 -- nvmf/common.sh@478 -- # '[' -n 3745708 ']' 00:11:30.115 04:02:44 -- nvmf/common.sh@479 -- # killprocess 3745708 00:11:30.115 04:02:44 -- common/autotest_common.sh@936 -- # '[' -z 3745708 ']' 00:11:30.115 04:02:44 -- common/autotest_common.sh@940 -- # kill -0 3745708 00:11:30.115 04:02:44 -- common/autotest_common.sh@941 -- # uname 00:11:30.115 04:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.115 04:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3745708 00:11:30.115 04:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:30.115 04:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:30.115 04:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3745708' 00:11:30.115 killing process with pid 3745708 00:11:30.115 04:02:44 -- common/autotest_common.sh@955 -- # kill 3745708 00:11:30.115 04:02:44 -- common/autotest_common.sh@960 -- # wait 3745708 00:11:30.115 04:02:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:30.115 04:02:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:30.115 04:02:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.115 04:02:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.115 04:02:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.115 04:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.115 04:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.649 04:02:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.649 00:11:32.649 real 0m13.087s 00:11:32.649 user 0m21.345s 00:11:32.649 sys 0m4.942s 00:11:32.649 04:02:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:32.649 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 ************************************ 00:11:32.649 END TEST nvmf_nvme_cli 00:11:32.649 ************************************ 00:11:32.649 04:02:46 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:32.649 04:02:46 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:32.649 04:02:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:32.649 04:02:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.649 04:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 ************************************ 00:11:32.649 START TEST nvmf_vfio_user 00:11:32.649 ************************************ 00:11:32.649 04:02:46 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:32.649 * Looking for test storage... 00:11:32.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.649 04:02:46 -- nvmf/common.sh@7 -- # uname -s 00:11:32.649 04:02:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.649 04:02:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.649 04:02:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.649 04:02:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.649 04:02:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.649 04:02:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.649 04:02:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.649 04:02:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.649 04:02:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.649 04:02:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.649 04:02:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:32.649 04:02:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:32.649 04:02:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.649 04:02:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.649 04:02:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.649 04:02:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.649 04:02:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.649 04:02:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.649 04:02:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.649 04:02:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.649 04:02:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.649 04:02:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.649 04:02:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.649 04:02:46 -- paths/export.sh@5 -- # export PATH 00:11:32.649 04:02:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.649 04:02:46 -- nvmf/common.sh@47 -- # : 0 00:11:32.649 04:02:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.649 04:02:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.649 04:02:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.649 04:02:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.649 04:02:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.649 04:02:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.649 04:02:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.649 04:02:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3747167 00:11:32.649 04:02:47 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3747167' 00:11:32.649 Process pid: 3747167 00:11:32.649 04:02:47 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:32.649 04:02:47 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3747167 00:11:32.649 04:02:46 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:32.649 04:02:47 -- common/autotest_common.sh@817 -- # '[' -z 3747167 ']' 00:11:32.649 04:02:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.649 04:02:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:32.649 04:02:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.649 04:02:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:32.649 04:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:32.649 [2024-04-19 04:02:47.050825] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:32.649 [2024-04-19 04:02:47.050882] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.649 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.649 [2024-04-19 04:02:47.131830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.908 [2024-04-19 04:02:47.221203] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.908 [2024-04-19 04:02:47.221246] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.908 [2024-04-19 04:02:47.221256] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.908 [2024-04-19 04:02:47.221265] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.908 [2024-04-19 04:02:47.221273] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.908 [2024-04-19 04:02:47.221320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.908 [2024-04-19 04:02:47.221445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.908 [2024-04-19 04:02:47.221484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.908 [2024-04-19 04:02:47.221485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.908 04:02:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.908 04:02:47 -- common/autotest_common.sh@850 -- # return 0 00:11:32.908 04:02:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:33.843 04:02:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:34.101 04:02:48 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:34.101 04:02:48 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:34.101 04:02:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:34.101 04:02:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:34.101 04:02:48 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:34.359 Malloc1 00:11:34.359 04:02:48 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:34.617 04:02:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:34.875 04:02:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:35.133 04:02:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.133 04:02:49 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:35.133 04:02:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:35.392 Malloc2 00:11:35.392 04:02:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:35.650 04:02:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:35.908 04:02:50 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:36.169 04:02:50 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:36.169 [2024-04-19 04:02:50.522460] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:11:36.169 [2024-04-19 04:02:50.522498] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747735 ] 00:11:36.169 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.169 [2024-04-19 04:02:50.559880] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:36.169 [2024-04-19 04:02:50.562611] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.169 [2024-04-19 04:02:50.562637] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f44ad8c2000 00:11:36.169 [2024-04-19 04:02:50.563610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.564612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.565617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.566621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.567626] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.568635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
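The two vfio-user devices above are brought up with an identical per-device RPC sequence; a condensed sketch of it as a loop, for reference, with the rpc.py path and socket layout exactly as used in this run (the identify bar-mapping trace continues below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # The VFIOUSER transport is created once, before the per-device loop.
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        # One vfio-user socket directory and one malloc namespace per subsystem.
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done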
00:11:36.169 [2024-04-19 04:02:50.569640] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.570642] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:36.169 [2024-04-19 04:02:50.571652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:36.169 [2024-04-19 04:02:50.571668] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f44ad8b7000 00:11:36.169 [2024-04-19 04:02:50.573080] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:36.169 [2024-04-19 04:02:50.593264] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:36.169 [2024-04-19 04:02:50.593297] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:36.169 [2024-04-19 04:02:50.597814] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:36.169 [2024-04-19 04:02:50.597868] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:36.169 [2024-04-19 04:02:50.597971] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:36.169 [2024-04-19 04:02:50.597995] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:36.169 [2024-04-19 04:02:50.598002] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:36.169 [2024-04-19 04:02:50.598809] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:36.169 [2024-04-19 04:02:50.598821] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:36.169 [2024-04-19 04:02:50.598831] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:36.169 [2024-04-19 04:02:50.599814] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:36.169 [2024-04-19 04:02:50.599825] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:36.169 [2024-04-19 04:02:50.599835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.600822] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:36.169 [2024-04-19 04:02:50.600833] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.601834] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:36.169 [2024-04-19 04:02:50.601846] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:36.169 [2024-04-19 04:02:50.601853] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.601861] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.601968] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:36.169 [2024-04-19 04:02:50.601974] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.601981] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:36.169 [2024-04-19 04:02:50.602845] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:36.169 [2024-04-19 04:02:50.603846] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:36.169 [2024-04-19 04:02:50.604850] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:36.169 [2024-04-19 04:02:50.605850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:36.169 [2024-04-19 04:02:50.605933] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:36.169 [2024-04-19 04:02:50.606868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:36.169 [2024-04-19 04:02:50.606879] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:36.169 [2024-04-19 04:02:50.606885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:36.169 [2024-04-19 04:02:50.606910] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:36.169 [2024-04-19 04:02:50.606920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:36.169 [2024-04-19 04:02:50.606941] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.169 [2024-04-19 04:02:50.606947] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.169 [2024-04-19 04:02:50.606963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.169 [2024-04-19 
04:02:50.607012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:36.169 [2024-04-19 04:02:50.607024] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:36.169 [2024-04-19 04:02:50.607031] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:36.169 [2024-04-19 04:02:50.607037] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:36.169 [2024-04-19 04:02:50.607043] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:36.169 [2024-04-19 04:02:50.607049] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:36.169 [2024-04-19 04:02:50.607055] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:36.169 [2024-04-19 04:02:50.607061] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:36.169 [2024-04-19 04:02:50.607071] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:36.169 [2024-04-19 04:02:50.607083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:36.169 [2024-04-19 04:02:50.607098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:36.169 [2024-04-19 04:02:50.607113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.169 [2024-04-19 04:02:50.607124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.169 [2024-04-19 04:02:50.607135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.170 [2024-04-19 04:02:50.607145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.170 [2024-04-19 04:02:50.607151] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607166] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607201] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:36.170 [2024-04-19 04:02:50.607208] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607219] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607226] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607314] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607325] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607334] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:36.170 [2024-04-19 04:02:50.607340] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:36.170 [2024-04-19 04:02:50.607357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607390] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:36.170 [2024-04-19 04:02:50.607405] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607415] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607424] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:36.170 [2024-04-19 04:02:50.607429] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.170 [2024-04-19 04:02:50.607437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607478] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607488] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607499] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:11:36.170 [2024-04-19 04:02:50.607505] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.170 [2024-04-19 04:02:50.607513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607541] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607549] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607560] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607567] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607574] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607580] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:36.170 [2024-04-19 04:02:50.607585] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:36.170 [2024-04-19 04:02:50.607592] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:36.170 [2024-04-19 04:02:50.607612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607638] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607718] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:36.170 [2024-04-19 04:02:50.607725] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:36.170 [2024-04-19 04:02:50.607729] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:36.170 [2024-04-19 04:02:50.607734] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:36.170 [2024-04-19 04:02:50.607742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:36.170 [2024-04-19 04:02:50.607751] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:36.170 [2024-04-19 04:02:50.607757] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:36.170 [2024-04-19 04:02:50.607766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607775] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:36.170 [2024-04-19 04:02:50.607781] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:36.170 [2024-04-19 04:02:50.607789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607798] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:36.170 [2024-04-19 04:02:50.607804] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:36.170 [2024-04-19 04:02:50.607811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:36.170 [2024-04-19 04:02:50.607820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:36.170 [2024-04-19 04:02:50.607858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:36.170 ===================================================== 00:11:36.170 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:36.170 ===================================================== 00:11:36.170 Controller Capabilities/Features 00:11:36.170 ================================ 00:11:36.170 Vendor ID: 4e58 00:11:36.170 Subsystem Vendor ID: 4e58 00:11:36.170 Serial Number: SPDK1 00:11:36.170 Model Number: SPDK bdev Controller 00:11:36.170 Firmware Version: 24.05 00:11:36.170 Recommended Arb Burst: 6 00:11:36.170 IEEE OUI Identifier: 8d 6b 50 00:11:36.170 Multi-path I/O 00:11:36.170 May have multiple subsystem ports: Yes 00:11:36.170 May have multiple controllers: Yes 00:11:36.170 Associated with SR-IOV VF: No 00:11:36.170 Max Data Transfer Size: 131072 00:11:36.170 Max Number of Namespaces: 32 00:11:36.170 Max Number of I/O Queues: 127 00:11:36.170 NVMe 
Specification Version (VS): 1.3 00:11:36.170 NVMe Specification Version (Identify): 1.3 00:11:36.170 Maximum Queue Entries: 256 00:11:36.170 Contiguous Queues Required: Yes 00:11:36.170 Arbitration Mechanisms Supported 00:11:36.170 Weighted Round Robin: Not Supported 00:11:36.171 Vendor Specific: Not Supported 00:11:36.171 Reset Timeout: 15000 ms 00:11:36.171 Doorbell Stride: 4 bytes 00:11:36.171 NVM Subsystem Reset: Not Supported 00:11:36.171 Command Sets Supported 00:11:36.171 NVM Command Set: Supported 00:11:36.171 Boot Partition: Not Supported 00:11:36.171 Memory Page Size Minimum: 4096 bytes 00:11:36.171 Memory Page Size Maximum: 4096 bytes 00:11:36.171 Persistent Memory Region: Not Supported 00:11:36.171 Optional Asynchronous Events Supported 00:11:36.171 Namespace Attribute Notices: Supported 00:11:36.171 Firmware Activation Notices: Not Supported 00:11:36.171 ANA Change Notices: Not Supported 00:11:36.171 PLE Aggregate Log Change Notices: Not Supported 00:11:36.171 LBA Status Info Alert Notices: Not Supported 00:11:36.171 EGE Aggregate Log Change Notices: Not Supported 00:11:36.171 Normal NVM Subsystem Shutdown event: Not Supported 00:11:36.171 Zone Descriptor Change Notices: Not Supported 00:11:36.171 Discovery Log Change Notices: Not Supported 00:11:36.171 Controller Attributes 00:11:36.171 128-bit Host Identifier: Supported 00:11:36.171 Non-Operational Permissive Mode: Not Supported 00:11:36.171 NVM Sets: Not Supported 00:11:36.171 Read Recovery Levels: Not Supported 00:11:36.171 Endurance Groups: Not Supported 00:11:36.171 Predictable Latency Mode: Not Supported 00:11:36.171 Traffic Based Keep ALive: Not Supported 00:11:36.171 Namespace Granularity: Not Supported 00:11:36.171 SQ Associations: Not Supported 00:11:36.171 UUID List: Not Supported 00:11:36.171 Multi-Domain Subsystem: Not Supported 00:11:36.171 Fixed Capacity Management: Not Supported 00:11:36.171 Variable Capacity Management: Not Supported 00:11:36.171 Delete Endurance Group: Not Supported 00:11:36.171 Delete NVM Set: Not Supported 00:11:36.171 Extended LBA Formats Supported: Not Supported 00:11:36.171 Flexible Data Placement Supported: Not Supported 00:11:36.171 00:11:36.171 Controller Memory Buffer Support 00:11:36.171 ================================ 00:11:36.171 Supported: No 00:11:36.171 00:11:36.171 Persistent Memory Region Support 00:11:36.171 ================================ 00:11:36.171 Supported: No 00:11:36.171 00:11:36.171 Admin Command Set Attributes 00:11:36.171 ============================ 00:11:36.171 Security Send/Receive: Not Supported 00:11:36.171 Format NVM: Not Supported 00:11:36.171 Firmware Activate/Download: Not Supported 00:11:36.171 Namespace Management: Not Supported 00:11:36.171 Device Self-Test: Not Supported 00:11:36.171 Directives: Not Supported 00:11:36.171 NVMe-MI: Not Supported 00:11:36.171 Virtualization Management: Not Supported 00:11:36.171 Doorbell Buffer Config: Not Supported 00:11:36.171 Get LBA Status Capability: Not Supported 00:11:36.171 Command & Feature Lockdown Capability: Not Supported 00:11:36.171 Abort Command Limit: 4 00:11:36.171 Async Event Request Limit: 4 00:11:36.171 Number of Firmware Slots: N/A 00:11:36.171 Firmware Slot 1 Read-Only: N/A 00:11:36.171 Firmware Activation Without Reset: N/A 00:11:36.171 Multiple Update Detection Support: N/A 00:11:36.171 Firmware Update Granularity: No Information Provided 00:11:36.171 Per-Namespace SMART Log: No 00:11:36.171 Asymmetric Namespace Access Log Page: Not Supported 00:11:36.171 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:11:36.171 Command Effects Log Page: Supported 00:11:36.171 Get Log Page Extended Data: Supported 00:11:36.171 Telemetry Log Pages: Not Supported 00:11:36.171 Persistent Event Log Pages: Not Supported 00:11:36.171 Supported Log Pages Log Page: May Support 00:11:36.171 Commands Supported & Effects Log Page: Not Supported 00:11:36.171 Feature Identifiers & Effects Log Page:May Support 00:11:36.171 NVMe-MI Commands & Effects Log Page: May Support 00:11:36.171 Data Area 4 for Telemetry Log: Not Supported 00:11:36.171 Error Log Page Entries Supported: 128 00:11:36.171 Keep Alive: Supported 00:11:36.171 Keep Alive Granularity: 10000 ms 00:11:36.171 00:11:36.171 NVM Command Set Attributes 00:11:36.171 ========================== 00:11:36.171 Submission Queue Entry Size 00:11:36.171 Max: 64 00:11:36.171 Min: 64 00:11:36.171 Completion Queue Entry Size 00:11:36.171 Max: 16 00:11:36.171 Min: 16 00:11:36.171 Number of Namespaces: 32 00:11:36.171 Compare Command: Supported 00:11:36.171 Write Uncorrectable Command: Not Supported 00:11:36.171 Dataset Management Command: Supported 00:11:36.171 Write Zeroes Command: Supported 00:11:36.171 Set Features Save Field: Not Supported 00:11:36.171 Reservations: Not Supported 00:11:36.171 Timestamp: Not Supported 00:11:36.171 Copy: Supported 00:11:36.171 Volatile Write Cache: Present 00:11:36.171 Atomic Write Unit (Normal): 1 00:11:36.171 Atomic Write Unit (PFail): 1 00:11:36.171 Atomic Compare & Write Unit: 1 00:11:36.171 Fused Compare & Write: Supported 00:11:36.171 Scatter-Gather List 00:11:36.171 SGL Command Set: Supported (Dword aligned) 00:11:36.171 SGL Keyed: Not Supported 00:11:36.171 SGL Bit Bucket Descriptor: Not Supported 00:11:36.171 SGL Metadata Pointer: Not Supported 00:11:36.171 Oversized SGL: Not Supported 00:11:36.171 SGL Metadata Address: Not Supported 00:11:36.171 SGL Offset: Not Supported 00:11:36.171 Transport SGL Data Block: Not Supported 00:11:36.171 Replay Protected Memory Block: Not Supported 00:11:36.171 00:11:36.171 Firmware Slot Information 00:11:36.171 ========================= 00:11:36.171 Active slot: 1 00:11:36.171 Slot 1 Firmware Revision: 24.05 00:11:36.171 00:11:36.171 00:11:36.171 Commands Supported and Effects 00:11:36.171 ============================== 00:11:36.171 Admin Commands 00:11:36.171 -------------- 00:11:36.171 Get Log Page (02h): Supported 00:11:36.171 Identify (06h): Supported 00:11:36.171 Abort (08h): Supported 00:11:36.171 Set Features (09h): Supported 00:11:36.171 Get Features (0Ah): Supported 00:11:36.171 Asynchronous Event Request (0Ch): Supported 00:11:36.171 Keep Alive (18h): Supported 00:11:36.171 I/O Commands 00:11:36.171 ------------ 00:11:36.171 Flush (00h): Supported LBA-Change 00:11:36.171 Write (01h): Supported LBA-Change 00:11:36.171 Read (02h): Supported 00:11:36.171 Compare (05h): Supported 00:11:36.171 Write Zeroes (08h): Supported LBA-Change 00:11:36.171 Dataset Management (09h): Supported LBA-Change 00:11:36.171 Copy (19h): Supported LBA-Change 00:11:36.171 Unknown (79h): Supported LBA-Change 00:11:36.171 Unknown (7Ah): Supported 00:11:36.171 00:11:36.171 Error Log 00:11:36.171 ========= 00:11:36.171 00:11:36.171 Arbitration 00:11:36.171 =========== 00:11:36.171 Arbitration Burst: 1 00:11:36.171 00:11:36.171 Power Management 00:11:36.171 ================ 00:11:36.171 Number of Power States: 1 00:11:36.171 Current Power State: Power State #0 00:11:36.171 Power State #0: 00:11:36.171 Max Power: 0.00 W 00:11:36.171 Non-Operational State: Operational 00:11:36.171 Entry 
Latency: Not Reported
00:11:36.171 Exit Latency: Not Reported
00:11:36.171 Relative Read Throughput: 0
00:11:36.171 Relative Read Latency: 0
00:11:36.171 Relative Write Throughput: 0
00:11:36.171 Relative Write Latency: 0
00:11:36.171 Idle Power: Not Reported
00:11:36.171 Active Power: Not Reported
00:11:36.171 Non-Operational Permissive Mode: Not Supported
00:11:36.171
00:11:36.171 Health Information
00:11:36.171 ==================
00:11:36.171 Critical Warnings:
00:11:36.171 Available Spare Space: OK
00:11:36.171 Temperature: OK
00:11:36.171 Device Reliability: OK
00:11:36.171 Read Only: No
00:11:36.171 Volatile Memory Backup: OK
00:11:36.171 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-19 04:02:50.607984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
[2024-04-19 04:02:50.608000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
[2024-04-19 04:02:50.608029] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
[2024-04-19 04:02:50.608041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-19 04:02:50.608049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-19 04:02:50.608057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-19 04:02:50.608066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-04-19 04:02:50.608881] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
[2024-04-19 04:02:50.608896] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
[2024-04-19 04:02:50.609880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
[2024-04-19 04:02:50.609943] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
[2024-04-19 04:02:50.609952] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
[2024-04-19 04:02:50.610891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
[2024-04-19 04:02:50.610905] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
[2024-04-19 04:02:50.610966] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
[2024-04-19 04:02:50.616355] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:11:36.172 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:11:36.172 Available Spare: 0%
00:11:36.172 Available Spare Threshold: 0%
00:11:36.172 Life Percentage Used: 0%
00:11:36.172 Data Units Read: 0 00:11:36.172 Data Units Written: 0 00:11:36.172 Host Read Commands: 0 00:11:36.172 Host Write Commands: 0 00:11:36.172 Controller Busy Time: 0 minutes 00:11:36.172 Power Cycles: 0 00:11:36.172 Power On Hours: 0 hours 00:11:36.172 Unsafe Shutdowns: 0 00:11:36.172 Unrecoverable Media Errors: 0 00:11:36.172 Lifetime Error Log Entries: 0 00:11:36.172 Warning Temperature Time: 0 minutes 00:11:36.172 Critical Temperature Time: 0 minutes 00:11:36.172 00:11:36.172 Number of Queues 00:11:36.172 ================ 00:11:36.172 Number of I/O Submission Queues: 127 00:11:36.172 Number of I/O Completion Queues: 127 00:11:36.172 00:11:36.172 Active Namespaces 00:11:36.172 ================= 00:11:36.172 Namespace ID:1 00:11:36.172 Error Recovery Timeout: Unlimited 00:11:36.172 Command Set Identifier: NVM (00h) 00:11:36.172 Deallocate: Supported 00:11:36.172 Deallocated/Unwritten Error: Not Supported 00:11:36.172 Deallocated Read Value: Unknown 00:11:36.172 Deallocate in Write Zeroes: Not Supported 00:11:36.172 Deallocated Guard Field: 0xFFFF 00:11:36.172 Flush: Supported 00:11:36.172 Reservation: Supported 00:11:36.172 Namespace Sharing Capabilities: Multiple Controllers 00:11:36.172 Size (in LBAs): 131072 (0GiB) 00:11:36.172 Capacity (in LBAs): 131072 (0GiB) 00:11:36.172 Utilization (in LBAs): 131072 (0GiB) 00:11:36.172 NGUID: F257B7F060D64A029C8ED98D9693E56B 00:11:36.172 UUID: f257b7f0-60d6-4a02-9c8e-d98d9693e56b 00:11:36.172 Thin Provisioning: Not Supported 00:11:36.172 Per-NS Atomic Units: Yes 00:11:36.172 Atomic Boundary Size (Normal): 0 00:11:36.172 Atomic Boundary Size (PFail): 0 00:11:36.172 Atomic Boundary Offset: 0 00:11:36.172 Maximum Single Source Range Length: 65535 00:11:36.172 Maximum Copy Length: 65535 00:11:36.172 Maximum Source Range Count: 1 00:11:36.172 NGUID/EUI64 Never Reused: No 00:11:36.172 Namespace Write Protected: No 00:11:36.172 Number of LBA Formats: 1 00:11:36.172 Current LBA Format: LBA Format #00 00:11:36.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:36.172 00:11:36.172 04:02:50 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:36.431 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.431 [2024-04-19 04:02:50.856234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:41.696 [2024-04-19 04:02:55.876461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:41.696 Initializing NVMe Controllers 00:11:41.696 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:41.696 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:41.696 Initialization complete. Launching workers. 
00:11:41.696 ======================================================== 00:11:41.696 Latency(us) 00:11:41.696 Device Information : IOPS MiB/s Average min max 00:11:41.696 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24279.49 94.84 5270.90 1437.60 9518.82 00:11:41.696 ======================================================== 00:11:41.696 Total : 24279.49 94.84 5270.90 1437.60 9518.82 00:11:41.696 00:11:41.696 04:02:55 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:41.696 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.696 [2024-04-19 04:02:56.121742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:46.969 [2024-04-19 04:03:01.167661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:46.969 Initializing NVMe Controllers 00:11:46.969 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:46.969 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:46.969 Initialization complete. Launching workers. 00:11:46.969 ======================================================== 00:11:46.969 Latency(us) 00:11:46.969 Device Information : IOPS MiB/s Average min max 00:11:46.969 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.60 62.70 7979.76 7700.65 8255.04 00:11:46.969 ======================================================== 00:11:46.969 Total : 16050.60 62.70 7979.76 7700.65 8255.04 00:11:46.969 00:11:46.969 04:03:01 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.969 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.969 [2024-04-19 04:03:01.411898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:52.271 [2024-04-19 04:03:06.487680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:52.271 Initializing NVMe Controllers 00:11:52.271 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:52.271 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:52.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:52.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:52.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:52.271 Initialization complete. Launching workers. 
00:11:52.271 Starting thread on core 2 00:11:52.271 Starting thread on core 3 00:11:52.271 Starting thread on core 1 00:11:52.271 04:03:06 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:52.271 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.530 [2024-04-19 04:03:06.813086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:55.907 [2024-04-19 04:03:09.876089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.907 Initializing NVMe Controllers 00:11:55.907 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:55.907 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:55.907 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:55.907 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:55.907 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:55.907 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:55.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:55.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:55.907 Initialization complete. Launching workers. 00:11:55.907 Starting thread on core 1 with urgent priority queue 00:11:55.907 Starting thread on core 2 with urgent priority queue 00:11:55.907 Starting thread on core 3 with urgent priority queue 00:11:55.907 Starting thread on core 0 with urgent priority queue 00:11:55.907 SPDK bdev Controller (SPDK1 ) core 0: 7422.67 IO/s 13.47 secs/100000 ios 00:11:55.907 SPDK bdev Controller (SPDK1 ) core 1: 6424.67 IO/s 15.57 secs/100000 ios 00:11:55.907 SPDK bdev Controller (SPDK1 ) core 2: 6208.33 IO/s 16.11 secs/100000 ios 00:11:55.907 SPDK bdev Controller (SPDK1 ) core 3: 6064.00 IO/s 16.49 secs/100000 ios 00:11:55.907 ======================================================== 00:11:55.907 00:11:55.907 04:03:09 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:55.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.907 [2024-04-19 04:03:10.201923] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:55.907 [2024-04-19 04:03:10.238449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:55.907 Initializing NVMe Controllers 00:11:55.907 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:55.907 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:55.907 Namespace ID: 1 size: 0GB 00:11:55.907 Initialization complete. 00:11:55.907 INFO: using host memory buffer for IO 00:11:55.907 Hello world! 
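Each of the runs above (the two spdk_nvme_perf passes, reconnect, arbitration and hello_world) points at the emulated controller with the same transport-ID string rather than a PCI address. A minimal sketch of that invocation pattern, assuming an already-built SPDK tree at $SPDK_DIR (the variable is illustrative; the flags are exactly the ones used in this log):

  # Transport ID: trtype, vfio-user socket path and subsystem NQN select the endpoint
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # 4 KiB read benchmark: 256 MB memory pool, queue depth 128, 5 s run, core mask 0x2
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # The example apps accept the same -r string, e.g. the hello_world run above
  "$SPDK_DIR"/build/examples/hello_world -d 256 -g -r "$TRID"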
00:11:55.907 04:03:10 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:55.907 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.165 [2024-04-19 04:03:10.557862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.102 Initializing NVMe Controllers 00:11:57.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.102 Initialization complete. Launching workers. 00:11:57.102 submit (in ns) avg, min, max = 9658.7, 4499.1, 4004000.0 00:11:57.102 complete (in ns) avg, min, max = 20391.6, 2680.9, 4002128.2 00:11:57.102 00:11:57.102 Submit histogram 00:11:57.102 ================ 00:11:57.102 Range in us Cumulative Count 00:11:57.102 4.480 - 4.509: 0.0465% ( 8) 00:11:57.102 4.509 - 4.538: 0.3602% ( 54) 00:11:57.102 4.538 - 4.567: 1.3941% ( 178) 00:11:57.102 4.567 - 4.596: 3.3982% ( 345) 00:11:57.102 4.596 - 4.625: 7.1798% ( 651) 00:11:57.102 4.625 - 4.655: 13.4708% ( 1083) 00:11:57.102 4.655 - 4.684: 24.5542% ( 1908) 00:11:57.102 4.684 - 4.713: 36.7935% ( 2107) 00:11:57.102 4.713 - 4.742: 48.7830% ( 2064) 00:11:57.102 4.742 - 4.771: 59.2623% ( 1804) 00:11:57.102 4.771 - 4.800: 69.0502% ( 1685) 00:11:57.102 4.800 - 4.829: 77.2814% ( 1417) 00:11:57.102 4.829 - 4.858: 82.3816% ( 878) 00:11:57.102 4.858 - 4.887: 85.4081% ( 521) 00:11:57.102 4.887 - 4.916: 87.2553% ( 318) 00:11:57.102 4.916 - 4.945: 88.9341% ( 289) 00:11:57.102 4.945 - 4.975: 90.8742% ( 334) 00:11:57.102 4.975 - 5.004: 92.7912% ( 330) 00:11:57.102 5.004 - 5.033: 94.7081% ( 330) 00:11:57.102 5.033 - 5.062: 96.4450% ( 299) 00:11:57.102 5.062 - 5.091: 97.4964% ( 181) 00:11:57.102 5.091 - 5.120: 98.2690% ( 133) 00:11:57.102 5.120 - 5.149: 98.8847% ( 106) 00:11:57.102 5.149 - 5.178: 99.2448% ( 62) 00:11:57.102 5.178 - 5.207: 99.3959% ( 26) 00:11:57.102 5.207 - 5.236: 99.4656% ( 12) 00:11:57.102 5.236 - 5.265: 99.5004% ( 6) 00:11:57.102 5.265 - 5.295: 99.5179% ( 3) 00:11:57.102 5.295 - 5.324: 99.5295% ( 2) 00:11:57.102 5.324 - 5.353: 99.5353% ( 1) 00:11:57.102 5.353 - 5.382: 99.5411% ( 1) 00:11:57.102 5.382 - 5.411: 99.5469% ( 1) 00:11:57.102 5.527 - 5.556: 99.5527% ( 1) 00:11:57.102 6.051 - 6.080: 99.5585% ( 1) 00:11:57.102 7.215 - 7.244: 99.5643% ( 1) 00:11:57.102 7.447 - 7.505: 99.5701% ( 1) 00:11:57.102 7.564 - 7.622: 99.5760% ( 1) 00:11:57.102 7.622 - 7.680: 99.5818% ( 1) 00:11:57.102 7.796 - 7.855: 99.5876% ( 1) 00:11:57.102 7.855 - 7.913: 99.5934% ( 1) 00:11:57.102 8.320 - 8.378: 99.5992% ( 1) 00:11:57.102 8.553 - 8.611: 99.6050% ( 1) 00:11:57.102 8.611 - 8.669: 99.6108% ( 1) 00:11:57.102 8.669 - 8.727: 99.6340% ( 4) 00:11:57.102 8.785 - 8.844: 99.6398% ( 1) 00:11:57.102 8.844 - 8.902: 99.6457% ( 1) 00:11:57.102 8.902 - 8.960: 99.6515% ( 1) 00:11:57.102 8.960 - 9.018: 99.6573% ( 1) 00:11:57.102 9.018 - 9.076: 99.6631% ( 1) 00:11:57.102 9.135 - 9.193: 99.6689% ( 1) 00:11:57.102 9.193 - 9.251: 99.6805% ( 2) 00:11:57.102 9.251 - 9.309: 99.6921% ( 2) 00:11:57.102 9.309 - 9.367: 99.6979% ( 1) 00:11:57.102 9.367 - 9.425: 99.7037% ( 1) 00:11:57.102 9.425 - 9.484: 99.7154% ( 2) 00:11:57.102 9.484 - 9.542: 99.7212% ( 1) 00:11:57.102 9.600 - 9.658: 99.7328% ( 2) 00:11:57.102 9.891 - 9.949: 99.7444% ( 2) 00:11:57.102 9.949 - 10.007: 99.7502% ( 1) 00:11:57.102 10.007 - 10.065: 99.7560% ( 1) 00:11:57.102 10.065 - 
10.124: 99.7618% ( 1) 00:11:57.102 10.124 - 10.182: 99.7793% ( 3) 00:11:57.102 10.298 - 10.356: 99.7851% ( 1) 00:11:57.102 10.356 - 10.415: 99.7909% ( 1) 00:11:57.102 10.415 - 10.473: 99.7967% ( 1) 00:11:57.102 10.473 - 10.531: 99.8025% ( 1) 00:11:57.102 10.531 - 10.589: 99.8083% ( 1) 00:11:57.102 10.764 - 10.822: 99.8141% ( 1) 00:11:57.102 10.996 - 11.055: 99.8257% ( 2) 00:11:57.102 11.055 - 11.113: 99.8374% ( 2) 00:11:57.102 11.113 - 11.171: 99.8432% ( 1) 00:11:57.102 11.753 - 11.811: 99.8490% ( 1) 00:11:57.102 11.985 - 12.044: 99.8548% ( 1) 00:11:57.102 12.160 - 12.218: 99.8606% ( 1) 00:11:57.102 12.335 - 12.393: 99.8664% ( 1) 00:11:57.102 12.451 - 12.509: 99.8722% ( 1) 00:11:57.102 15.011 - 15.127: 99.8780% ( 1) 00:11:57.102 3991.738 - 4021.527: 100.0000% ( 21) 00:11:57.102 00:11:57.102 Complete histogram 00:11:57.102 ================== 00:11:57.102 [2024-04-19 04:03:11.580901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.361 Range in us Cumulative Count 00:11:57.361 2.676 - 2.691: 0.2498% ( 43) 00:11:57.361 2.691 - 2.705: 12.2568% ( 2067) 00:11:57.361 2.705 - 2.720: 53.6102% ( 7119) 00:11:57.361 2.720 - 2.735: 72.0186% ( 3169) 00:11:57.361 2.735 - 2.749: 75.5911% ( 615) 00:11:57.361 2.749 - 2.764: 81.3651% ( 994) 00:11:57.361 2.764 - 2.778: 86.3085% ( 851) 00:11:57.361 2.778 - 2.793: 91.3041% ( 860) 00:11:57.361 2.793 - 2.807: 95.4168% ( 708) 00:11:57.361 2.807 - 2.822: 97.1595% ( 300) 00:11:57.361 2.822 - 2.836: 97.9437% ( 135) 00:11:57.361 2.836 - 2.851: 98.6175% ( 116) 00:11:57.361 2.851 - 2.865: 98.9254% ( 53) 00:11:57.361 2.865 - 2.880: 99.0473% ( 21) 00:11:57.361 2.880 - 2.895: 99.0880% ( 7) 00:11:57.361 2.895 - 2.909: 99.1054% ( 3) 00:11:57.361 2.909 - 2.924: 99.1403% ( 6) 00:11:57.361 2.924 - 2.938: 99.1519% ( 2) 00:11:57.361 2.953 - 2.967: 99.1809% ( 5) 00:11:57.361 2.967 - 2.982: 99.2042% ( 4) 00:11:57.361 2.982 - 2.996: 99.2158% ( 2) 00:11:57.361 3.011 - 3.025: 99.2332% ( 3) 00:11:57.361 3.025 - 3.040: 99.2390% ( 1) 00:11:57.361 3.055 - 3.069: 99.2448% ( 1) 00:11:57.361 3.084 - 3.098: 99.2507% ( 1) 00:11:57.361 3.098 - 3.113: 99.2565% ( 1) 00:11:57.361 3.156 - 3.171: 99.2623% ( 1) 00:11:57.361 5.498 - 5.527: 99.2681% ( 1) 00:11:57.361 5.644 - 5.673: 99.2739% ( 1) 00:11:57.361 6.109 - 6.138: 99.2797% ( 1) 00:11:57.361 6.138 - 6.167: 99.2855% ( 1) 00:11:57.361 6.167 - 6.196: 99.2913% ( 1) 00:11:57.361 6.196 - 6.225: 99.2971% ( 1) 00:11:57.361 6.225 - 6.255: 99.3029% ( 1) 00:11:57.361 6.284 - 6.313: 99.3087% ( 1) 00:11:57.361 6.342 - 6.371: 99.3146% ( 1) 00:11:57.361 6.400 - 6.429: 99.3204% ( 1) 00:11:57.361 6.516 - 6.545: 99.3262% ( 1) 00:11:57.361 6.545 - 6.575: 99.3320% ( 1) 00:11:57.361 6.604 - 6.633: 99.3378% ( 1) 00:11:57.361 6.633 - 6.662: 99.3436% ( 1) 00:11:57.361 6.691 - 6.720: 99.3494% ( 1) 00:11:57.361 6.720 - 6.749: 99.3610% ( 2) 00:11:57.361 6.778 - 6.807: 99.3668% ( 1) 00:11:57.361 6.836 - 6.865: 99.3726% ( 1) 00:11:57.361 6.865 - 6.895: 99.3784% ( 1) 00:11:57.361 7.040 - 7.069: 99.3901% ( 2) 00:11:57.361 7.156 - 7.185: 99.3959% ( 1) 00:11:57.361 7.302 - 7.331: 99.4017% ( 1) 00:11:57.361 7.564 - 7.622: 99.4075% ( 1) 00:11:57.361 7.738 - 7.796: 99.4133% ( 1) 00:11:57.361 7.855 - 7.913: 99.4191% ( 1) 00:11:57.361 7.913 - 7.971: 99.4249% ( 1) 00:11:57.361 7.971 - 8.029: 99.4307% ( 1) 00:11:57.361 8.087 - 8.145: 99.4365% ( 1) 00:11:57.361 8.145 - 8.204: 99.4423% ( 1) 00:11:57.361 8.204 - 8.262: 99.4482% ( 1) 00:11:57.361 8.320 - 8.378: 99.4540% ( 1) 00:11:57.361 8.785 - 8.844: 99.4656% ( 2)
00:11:57.361 8.960 - 9.018: 99.4714% ( 1) 00:11:57.361 9.076 - 9.135: 99.4772% ( 1) 00:11:57.361 9.135 - 9.193: 99.4946% ( 3) 00:11:57.361 9.425 - 9.484: 99.5004% ( 1) 00:11:57.361 9.542 - 9.600: 99.5121% ( 2) 00:11:57.361 9.716 - 9.775: 99.5179% ( 1) 00:11:57.361 9.775 - 9.833: 99.5237% ( 1) 00:11:57.361 10.124 - 10.182: 99.5295% ( 1) 00:11:57.361 10.356 - 10.415: 99.5353% ( 1) 00:11:57.361 10.764 - 10.822: 99.5411% ( 1) 00:11:57.361 11.055 - 11.113: 99.5469% ( 1) 00:11:57.361 12.451 - 12.509: 99.5527% ( 1) 00:11:57.361 13.091 - 13.149: 99.5585% ( 1) 00:11:57.361 3991.738 - 4021.527: 100.0000% ( 76) 00:11:57.361 00:11:57.361 04:03:11 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:57.361 04:03:11 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:57.361 04:03:11 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:57.361 04:03:11 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:57.361 04:03:11 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.361 [2024-04-19 04:03:11.864354] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:57.361 [ 00:11:57.361 { 00:11:57.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.361 "subtype": "Discovery", 00:11:57.361 "listen_addresses": [], 00:11:57.361 "allow_any_host": true, 00:11:57.361 "hosts": [] 00:11:57.361 }, 00:11:57.361 { 00:11:57.361 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.361 "subtype": "NVMe", 00:11:57.361 "listen_addresses": [ 00:11:57.361 { 00:11:57.361 "transport": "VFIOUSER", 00:11:57.361 "trtype": "VFIOUSER", 00:11:57.361 "adrfam": "IPv4", 00:11:57.362 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.362 "trsvcid": "0" 00:11:57.362 } 00:11:57.362 ], 00:11:57.362 "allow_any_host": true, 00:11:57.362 "hosts": [], 00:11:57.362 "serial_number": "SPDK1", 00:11:57.362 "model_number": "SPDK bdev Controller", 00:11:57.362 "max_namespaces": 32, 00:11:57.362 "min_cntlid": 1, 00:11:57.362 "max_cntlid": 65519, 00:11:57.362 "namespaces": [ 00:11:57.362 { 00:11:57.362 "nsid": 1, 00:11:57.362 "bdev_name": "Malloc1", 00:11:57.362 "name": "Malloc1", 00:11:57.362 "nguid": "F257B7F060D64A029C8ED98D9693E56B", 00:11:57.362 "uuid": "f257b7f0-60d6-4a02-9c8e-d98d9693e56b" 00:11:57.362 } 00:11:57.362 ] 00:11:57.362 }, 00:11:57.362 { 00:11:57.362 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.362 "subtype": "NVMe", 00:11:57.362 "listen_addresses": [ 00:11:57.362 { 00:11:57.362 "transport": "VFIOUSER", 00:11:57.362 "trtype": "VFIOUSER", 00:11:57.362 "adrfam": "IPv4", 00:11:57.362 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.362 "trsvcid": "0" 00:11:57.362 } 00:11:57.362 ], 00:11:57.362 "allow_any_host": true, 00:11:57.362 "hosts": [], 00:11:57.362 "serial_number": "SPDK2", 00:11:57.362 "model_number": "SPDK bdev Controller", 00:11:57.362 "max_namespaces": 32, 00:11:57.362 "min_cntlid": 1, 00:11:57.362 "max_cntlid": 65519, 00:11:57.362 "namespaces": [ 00:11:57.362 { 00:11:57.362 "nsid": 1, 00:11:57.362 "bdev_name": "Malloc2", 00:11:57.362 "name": "Malloc2", 00:11:57.362 "nguid": "ED9A97D017A54EC993002671B6B6927A", 00:11:57.362 "uuid": "ed9a97d0-17a5-4ec9-9300-2671b6b6927a" 00:11:57.362 } 00:11:57.362 ] 00:11:57.362 } 00:11:57.362 ] 00:11:57.620 04:03:11 -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:57.620 04:03:11 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3752160 00:11:57.620 04:03:11 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:57.620 04:03:11 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:57.620 04:03:11 -- common/autotest_common.sh@1251 -- # local i=0 00:11:57.620 04:03:11 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:57.620 04:03:11 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:57.620 04:03:11 -- common/autotest_common.sh@1262 -- # return 0 00:11:57.620 04:03:11 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:57.620 04:03:11 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:57.620 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.620 [2024-04-19 04:03:12.074855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.879 Malloc3 00:11:57.879 04:03:12 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:57.879 [2024-04-19 04:03:12.399614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.138 Asynchronous Event Request test 00:11:58.138 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.138 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.138 Registering asynchronous event callbacks... 00:11:58.138 Starting namespace attribute notice tests for all controllers... 00:11:58.138 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:58.138 aer_cb - Changed Namespace 00:11:58.138 Cleaning up... 
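The hot-add sequence above is what fires the AER: a new malloc bdev is attached to the live subsystem as a second namespace, the connected host receives a Namespace Attribute Changed notice (log page 4, as the aer_cb line shows), and the aer tool turns that into the touch file the script waits on; the subsystem dump that follows confirms NSID 2 is in place. The same steps can be replayed by hand with the RPCs the script uses, a sketch assuming a running target on the default RPC socket (the RPC variable is illustrative; commands and arguments mirror the log):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # adjust to your tree
  "$RPC" bdev_malloc_create 64 512 --name Malloc3                       # 64 MiB bdev, 512 B blocks
  "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2  # hot-add as NSID 2
  "$RPC" nvmf_get_subsystems                                            # NSID 2 / Malloc3 now listed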
00:11:58.138 [ 00:11:58.138 { 00:11:58.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.138 "subtype": "Discovery", 00:11:58.138 "listen_addresses": [], 00:11:58.138 "allow_any_host": true, 00:11:58.138 "hosts": [] 00:11:58.138 }, 00:11:58.138 { 00:11:58.138 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.138 "subtype": "NVMe", 00:11:58.138 "listen_addresses": [ 00:11:58.138 { 00:11:58.138 "transport": "VFIOUSER", 00:11:58.138 "trtype": "VFIOUSER", 00:11:58.138 "adrfam": "IPv4", 00:11:58.138 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.138 "trsvcid": "0" 00:11:58.138 } 00:11:58.138 ], 00:11:58.138 "allow_any_host": true, 00:11:58.138 "hosts": [], 00:11:58.138 "serial_number": "SPDK1", 00:11:58.138 "model_number": "SPDK bdev Controller", 00:11:58.138 "max_namespaces": 32, 00:11:58.138 "min_cntlid": 1, 00:11:58.138 "max_cntlid": 65519, 00:11:58.138 "namespaces": [ 00:11:58.138 { 00:11:58.138 "nsid": 1, 00:11:58.138 "bdev_name": "Malloc1", 00:11:58.138 "name": "Malloc1", 00:11:58.138 "nguid": "F257B7F060D64A029C8ED98D9693E56B", 00:11:58.138 "uuid": "f257b7f0-60d6-4a02-9c8e-d98d9693e56b" 00:11:58.138 }, 00:11:58.138 { 00:11:58.138 "nsid": 2, 00:11:58.138 "bdev_name": "Malloc3", 00:11:58.138 "name": "Malloc3", 00:11:58.138 "nguid": "EB9CE296255E4B8C825E1174ECEF1DDE", 00:11:58.138 "uuid": "eb9ce296-255e-4b8c-825e-1174ecef1dde" 00:11:58.138 } 00:11:58.138 ] 00:11:58.138 }, 00:11:58.138 { 00:11:58.138 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.138 "subtype": "NVMe", 00:11:58.138 "listen_addresses": [ 00:11:58.138 { 00:11:58.138 "transport": "VFIOUSER", 00:11:58.138 "trtype": "VFIOUSER", 00:11:58.138 "adrfam": "IPv4", 00:11:58.138 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.138 "trsvcid": "0" 00:11:58.138 } 00:11:58.138 ], 00:11:58.138 "allow_any_host": true, 00:11:58.138 "hosts": [], 00:11:58.138 "serial_number": "SPDK2", 00:11:58.138 "model_number": "SPDK bdev Controller", 00:11:58.138 "max_namespaces": 32, 00:11:58.138 "min_cntlid": 1, 00:11:58.138 "max_cntlid": 65519, 00:11:58.138 "namespaces": [ 00:11:58.138 { 00:11:58.138 "nsid": 1, 00:11:58.138 "bdev_name": "Malloc2", 00:11:58.138 "name": "Malloc2", 00:11:58.138 "nguid": "ED9A97D017A54EC993002671B6B6927A", 00:11:58.138 "uuid": "ed9a97d0-17a5-4ec9-9300-2671b6b6927a" 00:11:58.138 } 00:11:58.138 ] 00:11:58.138 } 00:11:58.138 ] 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@44 -- # wait 3752160 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:58.138 04:03:12 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:58.138 [2024-04-19 04:03:12.627801] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:11:58.138 [2024-04-19 04:03:12.627847] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752311 ] 00:11:58.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.138 [2024-04-19 04:03:12.663753] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:58.397 [2024-04-19 04:03:12.671583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:58.397 [2024-04-19 04:03:12.671611] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9fa64e6000 00:11:58.397 [2024-04-19 04:03:12.672581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.673587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.674598] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.675606] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.676610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.677622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.678623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.679632] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:58.397 [2024-04-19 04:03:12.680644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:58.397 [2024-04-19 04:03:12.680662] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9fa64db000 00:11:58.397 [2024-04-19 04:03:12.682073] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:58.397 [2024-04-19 04:03:12.704045] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:58.397 [2024-04-19 04:03:12.704075] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:58.397 [2024-04-19 04:03:12.706157] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:58.398 [2024-04-19 04:03:12.706213] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:58.398 [2024-04-19 04:03:12.706310] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:11:58.398 [2024-04-19 04:03:12.706331] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:58.398 [2024-04-19 04:03:12.706338] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:58.398 [2024-04-19 04:03:12.707157] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:58.398 [2024-04-19 04:03:12.707170] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:58.398 [2024-04-19 04:03:12.707180] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:58.398 [2024-04-19 04:03:12.708174] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:58.398 [2024-04-19 04:03:12.708187] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:58.398 [2024-04-19 04:03:12.708197] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.709180] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:58.398 [2024-04-19 04:03:12.709192] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.710192] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:58.398 [2024-04-19 04:03:12.710204] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:58.398 [2024-04-19 04:03:12.710211] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.710220] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.710327] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:58.398 [2024-04-19 04:03:12.710333] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.710339] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:58.398 [2024-04-19 04:03:12.711198] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:58.398 [2024-04-19 04:03:12.712213] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:58.398 [2024-04-19 04:03:12.713218] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:58.398 [2024-04-19 04:03:12.714221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.398 [2024-04-19 04:03:12.714270] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:58.398 [2024-04-19 04:03:12.715234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:58.398 [2024-04-19 04:03:12.715245] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:58.398 [2024-04-19 04:03:12.715251] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.715277] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:58.398 [2024-04-19 04:03:12.715288] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.715306] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.398 [2024-04-19 04:03:12.715313] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.398 [2024-04-19 04:03:12.715327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.723354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.723369] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:58.398 [2024-04-19 04:03:12.723375] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:58.398 [2024-04-19 04:03:12.723381] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:58.398 [2024-04-19 04:03:12.723387] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:58.398 [2024-04-19 04:03:12.723396] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:58.398 [2024-04-19 04:03:12.723402] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:58.398 [2024-04-19 04:03:12.723408] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.723418] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.723431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.731353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.731374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.398 [2024-04-19 04:03:12.731386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.398 [2024-04-19 04:03:12.731396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.398 [2024-04-19 04:03:12.731407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:58.398 [2024-04-19 04:03:12.731413] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.731424] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.731436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.739352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.739363] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:58.398 [2024-04-19 04:03:12.739370] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.739382] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.739390] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.739401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.747351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.747414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.747425] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.747435] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:58.398 [2024-04-19 04:03:12.747441] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:58.398 [2024-04-19 04:03:12.747450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:58.398 
[2024-04-19 04:03:12.755352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.755367] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:58.398 [2024-04-19 04:03:12.755379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.755389] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.755399] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.398 [2024-04-19 04:03:12.755405] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.398 [2024-04-19 04:03:12.755412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.763351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.763370] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.763379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.763389] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:58.398 [2024-04-19 04:03:12.763395] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.398 [2024-04-19 04:03:12.763404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.398 [2024-04-19 04:03:12.771354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:58.398 [2024-04-19 04:03:12.771368] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.771376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:58.398 [2024-04-19 04:03:12.771387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:58.399 [2024-04-19 04:03:12.771395] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:58.399 [2024-04-19 04:03:12.771402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:58.399 [2024-04-19 04:03:12.771408] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:58.399 [2024-04-19 04:03:12.771414] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:58.399 [2024-04-19 04:03:12.771421] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:58.399 [2024-04-19 04:03:12.771440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.779353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.779371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.787352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.787369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.795350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.795367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.803351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.803368] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:58.399 [2024-04-19 04:03:12.803374] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:58.399 [2024-04-19 04:03:12.803379] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:58.399 [2024-04-19 04:03:12.803383] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:58.399 [2024-04-19 04:03:12.803392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:58.399 [2024-04-19 04:03:12.803401] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:58.399 [2024-04-19 04:03:12.803407] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:58.399 [2024-04-19 04:03:12.803415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.803424] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:58.399 [2024-04-19 04:03:12.803430] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:58.399 [2024-04-19 04:03:12.803437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.803447] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:58.399 [2024-04-19 04:03:12.803452] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:58.399 [2024-04-19 04:03:12.803460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:58.399 [2024-04-19 04:03:12.811352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.811372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.811383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:58.399 [2024-04-19 04:03:12.811392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:58.399 ===================================================== 00:11:58.399 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:58.399 ===================================================== 00:11:58.399 Controller Capabilities/Features 00:11:58.399 ================================ 00:11:58.399 Vendor ID: 4e58 00:11:58.399 Subsystem Vendor ID: 4e58 00:11:58.399 Serial Number: SPDK2 00:11:58.399 Model Number: SPDK bdev Controller 00:11:58.399 Firmware Version: 24.05 00:11:58.399 Recommended Arb Burst: 6 00:11:58.399 IEEE OUI Identifier: 8d 6b 50 00:11:58.399 Multi-path I/O 00:11:58.399 May have multiple subsystem ports: Yes 00:11:58.399 May have multiple controllers: Yes 00:11:58.399 Associated with SR-IOV VF: No 00:11:58.399 Max Data Transfer Size: 131072 00:11:58.399 Max Number of Namespaces: 32 00:11:58.399 Max Number of I/O Queues: 127 00:11:58.399 NVMe Specification Version (VS): 1.3 00:11:58.399 NVMe Specification Version (Identify): 1.3 00:11:58.399 Maximum Queue Entries: 256 00:11:58.399 Contiguous Queues Required: Yes 00:11:58.399 Arbitration Mechanisms Supported 00:11:58.399 Weighted Round Robin: Not Supported 00:11:58.399 Vendor Specific: Not Supported 00:11:58.399 Reset Timeout: 15000 ms 00:11:58.399 Doorbell Stride: 4 bytes 00:11:58.399 NVM Subsystem Reset: Not Supported 00:11:58.399 Command Sets Supported 00:11:58.399 NVM Command Set: Supported 00:11:58.399 Boot Partition: Not Supported 00:11:58.399 Memory Page Size Minimum: 4096 bytes 00:11:58.399 Memory Page Size Maximum: 4096 bytes 00:11:58.399 Persistent Memory Region: Not Supported 00:11:58.399 Optional Asynchronous Events Supported 00:11:58.399 Namespace Attribute Notices: Supported 00:11:58.399 Firmware Activation Notices: Not Supported 00:11:58.399 ANA Change Notices: Not Supported 00:11:58.399 PLE Aggregate Log Change Notices: Not Supported 00:11:58.399 LBA Status Info Alert Notices: Not Supported 00:11:58.399 EGE Aggregate Log Change Notices: Not Supported 00:11:58.399 Normal NVM Subsystem Shutdown event: Not Supported 00:11:58.399 Zone Descriptor Change Notices: Not Supported 00:11:58.399 Discovery Log Change Notices: Not Supported 00:11:58.399 Controller Attributes 00:11:58.399 128-bit Host Identifier: Supported 00:11:58.399 Non-Operational Permissive Mode: Not Supported 00:11:58.399 NVM Sets: Not Supported 00:11:58.399 Read Recovery Levels: Not Supported 00:11:58.399 Endurance Groups: Not Supported 00:11:58.399 Predictable Latency Mode: Not Supported 00:11:58.399 Traffic Based Keep ALive: Not Supported 00:11:58.399 Namespace Granularity: Not Supported 
00:11:58.399 SQ Associations: Not Supported 00:11:58.399 UUID List: Not Supported 00:11:58.399 Multi-Domain Subsystem: Not Supported 00:11:58.399 Fixed Capacity Management: Not Supported 00:11:58.399 Variable Capacity Management: Not Supported 00:11:58.399 Delete Endurance Group: Not Supported 00:11:58.399 Delete NVM Set: Not Supported 00:11:58.399 Extended LBA Formats Supported: Not Supported 00:11:58.399 Flexible Data Placement Supported: Not Supported 00:11:58.399 00:11:58.399 Controller Memory Buffer Support 00:11:58.399 ================================ 00:11:58.399 Supported: No 00:11:58.399 00:11:58.399 Persistent Memory Region Support 00:11:58.399 ================================ 00:11:58.399 Supported: No 00:11:58.399 00:11:58.399 Admin Command Set Attributes 00:11:58.399 ============================ 00:11:58.399 Security Send/Receive: Not Supported 00:11:58.399 Format NVM: Not Supported 00:11:58.399 Firmware Activate/Download: Not Supported 00:11:58.399 Namespace Management: Not Supported 00:11:58.399 Device Self-Test: Not Supported 00:11:58.399 Directives: Not Supported 00:11:58.399 NVMe-MI: Not Supported 00:11:58.399 Virtualization Management: Not Supported 00:11:58.399 Doorbell Buffer Config: Not Supported 00:11:58.399 Get LBA Status Capability: Not Supported 00:11:58.399 Command & Feature Lockdown Capability: Not Supported 00:11:58.399 Abort Command Limit: 4 00:11:58.399 Async Event Request Limit: 4 00:11:58.399 Number of Firmware Slots: N/A 00:11:58.399 Firmware Slot 1 Read-Only: N/A 00:11:58.399 Firmware Activation Without Reset: N/A 00:11:58.399 Multiple Update Detection Support: N/A 00:11:58.399 Firmware Update Granularity: No Information Provided 00:11:58.399 Per-Namespace SMART Log: No 00:11:58.399 Asymmetric Namespace Access Log Page: Not Supported 00:11:58.399 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:58.399 Command Effects Log Page: Supported 00:11:58.399 Get Log Page Extended Data: Supported 00:11:58.399 Telemetry Log Pages: Not Supported 00:11:58.399 Persistent Event Log Pages: Not Supported 00:11:58.399 Supported Log Pages Log Page: May Support 00:11:58.399 Commands Supported & Effects Log Page: Not Supported 00:11:58.399 Feature Identifiers & Effects Log Page:May Support 00:11:58.399 NVMe-MI Commands & Effects Log Page: May Support 00:11:58.399 Data Area 4 for Telemetry Log: Not Supported 00:11:58.399 Error Log Page Entries Supported: 128 00:11:58.399 Keep Alive: Supported 00:11:58.399 Keep Alive Granularity: 10000 ms 00:11:58.399 00:11:58.399 NVM Command Set Attributes 00:11:58.399 ========================== 00:11:58.399 Submission Queue Entry Size 00:11:58.399 Max: 64 00:11:58.399 Min: 64 00:11:58.399 Completion Queue Entry Size 00:11:58.399 Max: 16 00:11:58.399 Min: 16 00:11:58.399 Number of Namespaces: 32 00:11:58.399 Compare Command: Supported 00:11:58.399 Write Uncorrectable Command: Not Supported 00:11:58.399 Dataset Management Command: Supported 00:11:58.399 Write Zeroes Command: Supported 00:11:58.399 Set Features Save Field: Not Supported 00:11:58.400 Reservations: Not Supported 00:11:58.400 Timestamp: Not Supported 00:11:58.400 Copy: Supported 00:11:58.400 Volatile Write Cache: Present 00:11:58.400 Atomic Write Unit (Normal): 1 00:11:58.400 Atomic Write Unit (PFail): 1 00:11:58.400 Atomic Compare & Write Unit: 1 00:11:58.400 Fused Compare & Write: Supported 00:11:58.400 Scatter-Gather List 00:11:58.400 SGL Command Set: Supported (Dword aligned) 00:11:58.400 SGL Keyed: Not Supported 00:11:58.400 SGL Bit Bucket Descriptor: Not Supported 00:11:58.400 
SGL Metadata Pointer: Not Supported 00:11:58.400 Oversized SGL: Not Supported 00:11:58.400 SGL Metadata Address: Not Supported 00:11:58.400 SGL Offset: Not Supported 00:11:58.400 Transport SGL Data Block: Not Supported 00:11:58.400 Replay Protected Memory Block: Not Supported 00:11:58.400 00:11:58.400 Firmware Slot Information 00:11:58.400 ========================= 00:11:58.400 Active slot: 1 00:11:58.400 Slot 1 Firmware Revision: 24.05 00:11:58.400 00:11:58.400 00:11:58.400 Commands Supported and Effects 00:11:58.400 ============================== 00:11:58.400 Admin Commands 00:11:58.400 -------------- 00:11:58.400 Get Log Page (02h): Supported 00:11:58.400 Identify (06h): Supported 00:11:58.400 Abort (08h): Supported 00:11:58.400 Set Features (09h): Supported 00:11:58.400 Get Features (0Ah): Supported 00:11:58.400 Asynchronous Event Request (0Ch): Supported 00:11:58.400 Keep Alive (18h): Supported 00:11:58.400 I/O Commands 00:11:58.400 ------------ 00:11:58.400 Flush (00h): Supported LBA-Change 00:11:58.400 Write (01h): Supported LBA-Change 00:11:58.400 Read (02h): Supported 00:11:58.400 Compare (05h): Supported 00:11:58.400 Write Zeroes (08h): Supported LBA-Change 00:11:58.400 Dataset Management (09h): Supported LBA-Change 00:11:58.400 Copy (19h): Supported LBA-Change 00:11:58.400 Unknown (79h): Supported LBA-Change 00:11:58.400 Unknown (7Ah): Supported 00:11:58.400 00:11:58.400 Error Log 00:11:58.400 ========= 00:11:58.400 00:11:58.400 Arbitration 00:11:58.400 =========== 00:11:58.400 Arbitration Burst: 1 00:11:58.400 00:11:58.400 Power Management 00:11:58.400 ================ 00:11:58.400 Number of Power States: 1 00:11:58.400 Current Power State: Power State #0 00:11:58.400 Power State #0: 00:11:58.400 Max Power: 0.00 W 00:11:58.400 Non-Operational State: Operational 00:11:58.400 Entry Latency: Not Reported 00:11:58.400 Exit Latency: Not Reported 00:11:58.400 Relative Read Throughput: 0 00:11:58.400 Relative Read Latency: 0 00:11:58.400 Relative Write Throughput: 0 00:11:58.400 Relative Write Latency: 0 00:11:58.400 Idle Power: Not Reported 00:11:58.400 Active Power: Not Reported 00:11:58.400 Non-Operational Permissive Mode: Not Supported 00:11:58.400 00:11:58.400 Health Information 00:11:58.400 ================== 00:11:58.400 Critical Warnings: 00:11:58.400 Available Spare Space: OK 00:11:58.400 Temperature: OK 00:11:58.400 Device Reliability: OK 00:11:58.400 Read Only: No 00:11:58.400 Volatile Memory Backup: OK 00:11:58.400 [2024-04-19 04:03:12.811517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:58.400 [2024-04-19 04:03:12.819351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:58.400 [2024-04-19 04:03:12.819385] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:58.400 [2024-04-19 04:03:12.819397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.400 [2024-04-19 04:03:12.819406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.400 [2024-04-19 04:03:12.819417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.400 [2024-04-19 04:03:12.819425]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.400 [2024-04-19 04:03:12.819481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:58.400 [2024-04-19 04:03:12.819496] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:58.400 [2024-04-19 04:03:12.820494] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.400 [2024-04-19 04:03:12.820554] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:58.400 [2024-04-19 04:03:12.820563] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:58.400 [2024-04-19 04:03:12.821502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:58.400 [2024-04-19 04:03:12.821517] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:58.400 [2024-04-19 04:03:12.821575] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:58.400 [2024-04-19 04:03:12.823037] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:58.400 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:58.400 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:58.400 Available Spare: 0% 00:11:58.400 Available Spare Threshold: 0% 00:11:58.400 Life Percentage Used: 0% 00:11:58.400 Data Units Read: 0 00:11:58.400 Data Units Written: 0 00:11:58.400 Host Read Commands: 0 00:11:58.400 Host Write Commands: 0 00:11:58.400 Controller Busy Time: 0 minutes 00:11:58.400 Power Cycles: 0 00:11:58.400 Power On Hours: 0 hours 00:11:58.400 Unsafe Shutdowns: 0 00:11:58.400 Unrecoverable Media Errors: 0 00:11:58.400 Lifetime Error Log Entries: 0 00:11:58.400 Warning Temperature Time: 0 minutes 00:11:58.400 Critical Temperature Time: 0 minutes 00:11:58.400 00:11:58.400 Number of Queues 00:11:58.400 ================ 00:11:58.400 Number of I/O Submission Queues: 127 00:11:58.400 Number of I/O Completion Queues: 127 00:11:58.400 00:11:58.400 Active Namespaces 00:11:58.400 ================= 00:11:58.400 Namespace ID:1 00:11:58.400 Error Recovery Timeout: Unlimited 00:11:58.400 Command Set Identifier: NVM (00h) 00:11:58.400 Deallocate: Supported 00:11:58.400 Deallocated/Unwritten Error: Not Supported 00:11:58.400 Deallocated Read Value: Unknown 00:11:58.400 Deallocate in Write Zeroes: Not Supported 00:11:58.400 Deallocated Guard Field: 0xFFFF 00:11:58.400 Flush: Supported 00:11:58.400 Reservation: Supported 00:11:58.400 Namespace Sharing Capabilities: Multiple Controllers 00:11:58.400 Size (in LBAs): 131072 (0GiB) 00:11:58.400 Capacity (in LBAs): 131072 (0GiB) 00:11:58.400 Utilization (in LBAs): 131072 (0GiB) 00:11:58.400 NGUID: ED9A97D017A54EC993002671B6B6927A 00:11:58.400 UUID: ed9a97d0-17a5-4ec9-9300-2671b6b6927a 00:11:58.400 Thin Provisioning: Not Supported 00:11:58.400 Per-NS Atomic Units: Yes 00:11:58.400 Atomic Boundary Size (Normal): 0 00:11:58.400 Atomic Boundary Size (PFail): 0 00:11:58.400 Atomic Boundary Offset: 0 00:11:58.400 Maximum Single Source Range Length: 65535
00:11:58.400 Maximum Copy Length: 65535 00:11:58.400 Maximum Source Range Count: 1 00:11:58.400 NGUID/EUI64 Never Reused: No 00:11:58.400 Namespace Write Protected: No 00:11:58.400 Number of LBA Formats: 1 00:11:58.400 Current LBA Format: LBA Format #00 00:11:58.400 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.400 00:11:58.400 04:03:12 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:58.400 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.657 [2024-04-19 04:03:13.072531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:03.924 [2024-04-19 04:03:18.178615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:03.924 Initializing NVMe Controllers 00:12:03.924 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:03.924 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:03.924 Initialization complete. Launching workers. 00:12:03.924 ======================================================== 00:12:03.924 Latency(us) 00:12:03.924 Device Information : IOPS MiB/s Average min max 00:12:03.924 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 41987.30 164.01 3047.51 943.75 5978.33 00:12:03.924 ======================================================== 00:12:03.924 Total : 41987.30 164.01 3047.51 943.75 5978.33 00:12:03.924 00:12:03.924 04:03:18 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:03.924 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.924 [2024-04-19 04:03:18.435490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:09.190 [2024-04-19 04:03:23.456508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:09.190 Initializing NVMe Controllers 00:12:09.190 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:09.190 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:09.190 Initialization complete. Launching workers. 
00:12:09.190 ======================================================== 00:12:09.190 Latency(us) 00:12:09.190 Device Information : IOPS MiB/s Average min max 00:12:09.190 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24589.00 96.05 5208.67 1421.24 8656.56 00:12:09.190 ======================================================== 00:12:09.190 Total : 24589.00 96.05 5208.67 1421.24 8656.56 00:12:09.190 00:12:09.190 04:03:23 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:09.190 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.190 [2024-04-19 04:03:23.697830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:14.456 [2024-04-19 04:03:28.838445] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:14.456 Initializing NVMe Controllers 00:12:14.456 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.456 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:14.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:14.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:14.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:14.456 Initialization complete. Launching workers. 00:12:14.456 Starting thread on core 2 00:12:14.456 Starting thread on core 3 00:12:14.456 Starting thread on core 1 00:12:14.456 04:03:28 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:14.456 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.715 [2024-04-19 04:03:29.164662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.027 [2024-04-19 04:03:32.219204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.027 Initializing NVMe Controllers 00:12:18.027 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.027 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.027 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:18.027 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:18.027 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:18.027 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:18.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:18.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:18.027 Initialization complete. Launching workers. 
00:12:18.027 Starting thread on core 1 with urgent priority queue 00:12:18.027 Starting thread on core 2 with urgent priority queue 00:12:18.027 Starting thread on core 3 with urgent priority queue 00:12:18.027 Starting thread on core 0 with urgent priority queue 00:12:18.027 SPDK bdev Controller (SPDK2 ) core 0: 7940.33 IO/s 12.59 secs/100000 ios 00:12:18.027 SPDK bdev Controller (SPDK2 ) core 1: 7216.33 IO/s 13.86 secs/100000 ios 00:12:18.027 SPDK bdev Controller (SPDK2 ) core 2: 7107.67 IO/s 14.07 secs/100000 ios 00:12:18.027 SPDK bdev Controller (SPDK2 ) core 3: 7200.00 IO/s 13.89 secs/100000 ios 00:12:18.027 ======================================================== 00:12:18.027 00:12:18.027 04:03:32 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.027 [2024-04-19 04:03:32.545850] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.286 [2024-04-19 04:03:32.555921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.286 Initializing NVMe Controllers 00:12:18.286 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.286 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.286 Namespace ID: 1 size: 0GB 00:12:18.286 Initialization complete. 00:12:18.286 INFO: using host memory buffer for IO 00:12:18.286 Hello world! 00:12:18.286 04:03:32 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.545 [2024-04-19 04:03:32.868204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:19.481 Initializing NVMe Controllers 00:12:19.481 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.481 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:19.481 Initialization complete. Launching workers. 
00:12:19.481 submit (in ns) avg, min, max = 9025.4, 4500.0, 4003273.6 00:12:19.481 complete (in ns) avg, min, max = 28850.9, 2668.2, 6991293.6 00:12:19.481 00:12:19.481 Submit histogram 00:12:19.481 ================ 00:12:19.481 Range in us Cumulative Count 00:12:19.481 4.480 - 4.509: 0.0410% ( 5) 00:12:19.481 4.509 - 4.538: 1.3191% ( 156) 00:12:19.481 4.538 - 4.567: 3.2528% ( 236) 00:12:19.481 4.567 - 4.596: 5.8091% ( 312) 00:12:19.481 4.596 - 4.625: 10.7415% ( 602) 00:12:19.481 4.625 - 4.655: 22.1057% ( 1387) 00:12:19.481 4.655 - 4.684: 32.7653% ( 1301) 00:12:19.481 4.684 - 4.713: 44.5473% ( 1438) 00:12:19.481 4.713 - 4.742: 56.1573% ( 1417) 00:12:19.481 4.742 - 4.771: 65.7599% ( 1172) 00:12:19.481 4.771 - 4.800: 74.8791% ( 1113) 00:12:19.481 4.800 - 4.829: 80.2376% ( 654) 00:12:19.481 4.829 - 4.858: 84.2524% ( 490) 00:12:19.481 4.858 - 4.887: 86.5137% ( 276) 00:12:19.481 4.887 - 4.916: 88.5211% ( 245) 00:12:19.481 4.916 - 4.945: 90.1844% ( 203) 00:12:19.481 4.945 - 4.975: 92.1999% ( 246) 00:12:19.481 4.975 - 5.004: 93.9287% ( 211) 00:12:19.481 5.004 - 5.033: 95.6411% ( 209) 00:12:19.481 5.033 - 5.062: 96.8374% ( 146) 00:12:19.481 5.062 - 5.091: 97.8288% ( 121) 00:12:19.481 5.091 - 5.120: 98.6153% ( 96) 00:12:19.481 5.120 - 5.149: 99.0987% ( 59) 00:12:19.481 5.149 - 5.178: 99.2708% ( 21) 00:12:19.481 5.178 - 5.207: 99.3773% ( 13) 00:12:19.481 5.207 - 5.236: 99.4183% ( 5) 00:12:19.481 5.236 - 5.265: 99.4429% ( 3) 00:12:19.481 5.265 - 5.295: 99.4510% ( 1) 00:12:19.481 5.295 - 5.324: 99.4592% ( 1) 00:12:19.481 5.324 - 5.353: 99.4674% ( 1) 00:12:19.481 5.353 - 5.382: 99.4756% ( 1) 00:12:19.481 5.615 - 5.644: 99.4838% ( 1) 00:12:19.481 6.807 - 6.836: 99.4920% ( 1) 00:12:19.481 8.029 - 8.087: 99.5002% ( 1) 00:12:19.481 8.204 - 8.262: 99.5084% ( 1) 00:12:19.481 8.320 - 8.378: 99.5166% ( 1) 00:12:19.481 8.378 - 8.436: 99.5248% ( 1) 00:12:19.481 8.495 - 8.553: 99.5412% ( 2) 00:12:19.481 8.553 - 8.611: 99.5576% ( 2) 00:12:19.481 8.611 - 8.669: 99.5658% ( 1) 00:12:19.481 8.669 - 8.727: 99.5821% ( 2) 00:12:19.481 8.727 - 8.785: 99.5903% ( 1) 00:12:19.481 8.785 - 8.844: 99.5985% ( 1) 00:12:19.481 8.902 - 8.960: 99.6149% ( 2) 00:12:19.481 8.960 - 9.018: 99.6231% ( 1) 00:12:19.481 9.018 - 9.076: 99.6395% ( 2) 00:12:19.481 9.076 - 9.135: 99.6477% ( 1) 00:12:19.481 9.193 - 9.251: 99.6559% ( 1) 00:12:19.481 9.367 - 9.425: 99.6641% ( 1) 00:12:19.481 9.658 - 9.716: 99.6887% ( 3) 00:12:19.481 9.716 - 9.775: 99.6968% ( 1) 00:12:19.481 9.891 - 9.949: 99.7132% ( 2) 00:12:19.481 9.949 - 10.007: 99.7214% ( 1) 00:12:19.481 10.007 - 10.065: 99.7378% ( 2) 00:12:19.481 10.065 - 10.124: 99.7460% ( 1) 00:12:19.481 10.124 - 10.182: 99.7542% ( 1) 00:12:19.481 10.182 - 10.240: 99.7624% ( 1) 00:12:19.481 10.240 - 10.298: 99.7706% ( 1) 00:12:19.481 10.298 - 10.356: 99.7788% ( 1) 00:12:19.481 10.356 - 10.415: 99.7870% ( 1) 00:12:19.481 10.531 - 10.589: 99.8034% ( 2) 00:12:19.481 10.589 - 10.647: 99.8197% ( 2) 00:12:19.481 10.764 - 10.822: 99.8361% ( 2) 00:12:19.481 10.822 - 10.880: 99.8443% ( 1) 00:12:19.481 10.996 - 11.055: 99.8607% ( 2) 00:12:19.481 11.927 - 11.985: 99.8689% ( 1) 00:12:19.481 12.276 - 12.335: 99.8771% ( 1) 00:12:19.481 12.567 - 12.625: 99.8853% ( 1) 00:12:19.481 17.455 - 17.571: 99.8935% ( 1) 00:12:19.481 3991.738 - 4021.527: 100.0000% ( 13) 00:12:19.481 00:12:19.481 Complete histogram 00:12:19.481 ================== 00:12:19.481 Range in us Cumulative Count 00:12:19.481 2.662 - 2.676: 0.2376% ( 29) 00:12:19.481 2.676 - 2.691: 10.0205% ( 1194) 00:12:19.481 2.691 - 2.705: 43.9902% ( 4146) 00:12:19.481 2.705 
- 2.720: 59.6231% ( 1908) 00:12:19.481 2.720 - 2.735: 63.0807% ( 422) 00:12:19.481 2.735 - 2.749: 68.3982% ( 649) 00:12:19.481 2.749 - 2.764: 81.0242% ( 1541) 00:12:19.481 2.764 - 2.778: 91.3232% ( 1257) [2024-04-19 04:03:33.967795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:19.740 2.778 - 2.793: 95.0758% ( 458) 00:12:19.740 2.793 - 2.807: 96.9275% ( 226) 00:12:19.740 2.807 - 2.822: 97.8124% ( 108) 00:12:19.740 2.822 - 2.836: 98.2302% ( 51) 00:12:19.740 2.836 - 2.851: 98.5907% ( 44) 00:12:19.740 2.851 - 2.865: 98.8038% ( 26) 00:12:19.740 2.865 - 2.880: 98.8939% ( 11) 00:12:19.740 2.880 - 2.895: 98.9103% ( 2) 00:12:19.740 2.895 - 2.909: 98.9431% ( 4) 00:12:19.740 2.909 - 2.924: 98.9758% ( 4) 00:12:19.740 2.924 - 2.938: 99.0086% ( 4) 00:12:19.740 2.938 - 2.953: 99.0332% ( 3) 00:12:19.740 2.967 - 2.982: 99.0414% ( 1) 00:12:19.740 2.982 - 2.996: 99.0660% ( 3) 00:12:19.740 2.996 - 3.011: 99.0741% ( 1) 00:12:19.740 3.098 - 3.113: 99.0905% ( 2) 00:12:19.740 6.138 - 6.167: 99.0987% ( 1) 00:12:19.740 6.458 - 6.487: 99.1069% ( 1) 00:12:19.740 6.516 - 6.545: 99.1233% ( 2) 00:12:19.740 6.545 - 6.575: 99.1315% ( 1) 00:12:19.740 6.575 - 6.604: 99.1397% ( 1) 00:12:19.740 6.604 - 6.633: 99.1479% ( 1) 00:12:19.740 6.807 - 6.836: 99.1561% ( 1) 00:12:19.740 6.836 - 6.865: 99.1643% ( 1) 00:12:19.740 6.865 - 6.895: 99.1807% ( 2) 00:12:19.740 7.040 - 7.069: 99.1889% ( 1) 00:12:19.740 7.098 - 7.127: 99.1971% ( 1) 00:12:19.740 7.156 - 7.185: 99.2052% ( 1) 00:12:19.740 7.331 - 7.360: 99.2134% ( 1) 00:12:19.740 7.389 - 7.418: 99.2216% ( 1) 00:12:19.740 7.418 - 7.447: 99.2298% ( 1) 00:12:19.740 7.505 - 7.564: 99.2462% ( 2) 00:12:19.740 7.680 - 7.738: 99.2544% ( 1) 00:12:19.740 7.738 - 7.796: 99.2626% ( 1) 00:12:19.740 7.913 - 7.971: 99.2708% ( 1) 00:12:19.740 8.204 - 8.262: 99.2790% ( 1) 00:12:19.740 8.262 - 8.320: 99.3118% ( 4) 00:12:19.740 9.076 - 9.135: 99.3281% ( 2) 00:12:19.740 9.309 - 9.367: 99.3363% ( 1) 00:12:19.740 9.658 - 9.716: 99.3445% ( 1) 00:12:19.740 10.589 - 10.647: 99.3527% ( 1) 00:12:19.740 3991.738 - 4021.527: 99.9918% ( 78) 00:12:19.740 6970.647 - 7000.436: 100.0000% ( 1) 00:12:19.740 00:12:19.740
04:03:34 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:19.740 04:03:34 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:19.740 04:03:34 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:19.740 04:03:34 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:19.740 04:03:34 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:19.740 [ 00:12:19.740 { 00:12:19.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:19.740 "subtype": "Discovery", 00:12:19.740 "listen_addresses": [], 00:12:19.740 "allow_any_host": true, 00:12:19.740 "hosts": [] 00:12:19.740 }, 00:12:19.740 { 00:12:19.740 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:19.740 "subtype": "NVMe", 00:12:19.740 "listen_addresses": [ 00:12:19.740 { 00:12:19.740 "transport": "VFIOUSER", 00:12:19.740 "trtype": "VFIOUSER", 00:12:19.741 "adrfam": "IPv4", 00:12:19.741 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:19.741 "trsvcid": "0" 00:12:19.741 } 00:12:19.741 ], 00:12:19.741 "allow_any_host": true, 00:12:19.741 "hosts": [], 00:12:19.741 "serial_number": "SPDK1", 00:12:19.741 "model_number": "SPDK bdev Controller",
"max_namespaces": 32, 00:12:19.741 "min_cntlid": 1, 00:12:19.741 "max_cntlid": 65519, 00:12:19.741 "namespaces": [ 00:12:19.741 { 00:12:19.741 "nsid": 1, 00:12:19.741 "bdev_name": "Malloc1", 00:12:19.741 "name": "Malloc1", 00:12:19.741 "nguid": "F257B7F060D64A029C8ED98D9693E56B", 00:12:19.741 "uuid": "f257b7f0-60d6-4a02-9c8e-d98d9693e56b" 00:12:19.741 }, 00:12:19.741 { 00:12:19.741 "nsid": 2, 00:12:19.741 "bdev_name": "Malloc3", 00:12:19.741 "name": "Malloc3", 00:12:19.741 "nguid": "EB9CE296255E4B8C825E1174ECEF1DDE", 00:12:19.741 "uuid": "eb9ce296-255e-4b8c-825e-1174ecef1dde" 00:12:19.741 } 00:12:19.741 ] 00:12:19.741 }, 00:12:19.741 { 00:12:19.741 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:19.741 "subtype": "NVMe", 00:12:19.741 "listen_addresses": [ 00:12:19.741 { 00:12:19.741 "transport": "VFIOUSER", 00:12:19.741 "trtype": "VFIOUSER", 00:12:19.741 "adrfam": "IPv4", 00:12:19.741 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:19.741 "trsvcid": "0" 00:12:19.741 } 00:12:19.741 ], 00:12:19.741 "allow_any_host": true, 00:12:19.741 "hosts": [], 00:12:19.741 "serial_number": "SPDK2", 00:12:19.741 "model_number": "SPDK bdev Controller", 00:12:19.741 "max_namespaces": 32, 00:12:19.741 "min_cntlid": 1, 00:12:19.741 "max_cntlid": 65519, 00:12:19.741 "namespaces": [ 00:12:19.741 { 00:12:19.741 "nsid": 1, 00:12:19.741 "bdev_name": "Malloc2", 00:12:19.741 "name": "Malloc2", 00:12:19.741 "nguid": "ED9A97D017A54EC993002671B6B6927A", 00:12:19.741 "uuid": "ed9a97d0-17a5-4ec9-9300-2671b6b6927a" 00:12:19.741 } 00:12:19.741 ] 00:12:19.741 } 00:12:19.741 ] 00:12:19.999 04:03:34 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:19.999 04:03:34 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3756110 00:12:19.999 04:03:34 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:19.999 04:03:34 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:19.999 04:03:34 -- common/autotest_common.sh@1251 -- # local i=0 00:12:19.999 04:03:34 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:19.999 04:03:34 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:19.999 04:03:34 -- common/autotest_common.sh@1262 -- # return 0 00:12:20.000 04:03:34 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:20.000 04:03:34 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:20.000 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.000 Malloc4 00:12:20.000 [2024-04-19 04:03:34.445419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.000 04:03:34 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:20.258 [2024-04-19 04:03:34.702293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.258 04:03:34 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.258 Asynchronous Event Request test 00:12:20.258 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.258 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.258 Registering asynchronous event callbacks... 00:12:20.258 Starting namespace attribute notice tests for all controllers... 00:12:20.258 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:20.258 aer_cb - Changed Namespace 00:12:20.258 Cleaning up... 00:12:20.517 [ 00:12:20.517 { 00:12:20.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.517 "subtype": "Discovery", 00:12:20.517 "listen_addresses": [], 00:12:20.517 "allow_any_host": true, 00:12:20.517 "hosts": [] 00:12:20.517 }, 00:12:20.517 { 00:12:20.517 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.517 "subtype": "NVMe", 00:12:20.517 "listen_addresses": [ 00:12:20.517 { 00:12:20.517 "transport": "VFIOUSER", 00:12:20.517 "trtype": "VFIOUSER", 00:12:20.517 "adrfam": "IPv4", 00:12:20.517 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.517 "trsvcid": "0" 00:12:20.517 } 00:12:20.517 ], 00:12:20.517 "allow_any_host": true, 00:12:20.517 "hosts": [], 00:12:20.517 "serial_number": "SPDK1", 00:12:20.517 "model_number": "SPDK bdev Controller", 00:12:20.517 "max_namespaces": 32, 00:12:20.517 "min_cntlid": 1, 00:12:20.517 "max_cntlid": 65519, 00:12:20.517 "namespaces": [ 00:12:20.517 { 00:12:20.517 "nsid": 1, 00:12:20.517 "bdev_name": "Malloc1", 00:12:20.517 "name": "Malloc1", 00:12:20.517 "nguid": "F257B7F060D64A029C8ED98D9693E56B", 00:12:20.517 "uuid": "f257b7f0-60d6-4a02-9c8e-d98d9693e56b" 00:12:20.517 }, 00:12:20.517 { 00:12:20.517 "nsid": 2, 00:12:20.517 "bdev_name": "Malloc3", 00:12:20.517 "name": "Malloc3", 00:12:20.517 "nguid": "EB9CE296255E4B8C825E1174ECEF1DDE", 00:12:20.517 "uuid": "eb9ce296-255e-4b8c-825e-1174ecef1dde" 00:12:20.517 } 00:12:20.517 ] 00:12:20.517 }, 00:12:20.517 { 00:12:20.517 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.517 "subtype": "NVMe", 00:12:20.517 "listen_addresses": [ 00:12:20.517 { 00:12:20.517 "transport": "VFIOUSER", 00:12:20.517 "trtype": "VFIOUSER", 00:12:20.517 "adrfam": "IPv4", 00:12:20.517 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.517 "trsvcid": "0" 00:12:20.517 } 00:12:20.517 ], 00:12:20.517 "allow_any_host": true, 00:12:20.517 "hosts": [], 00:12:20.517 "serial_number": "SPDK2", 00:12:20.517 "model_number": "SPDK bdev Controller", 00:12:20.517 "max_namespaces": 32, 00:12:20.517 "min_cntlid": 1, 
00:12:20.517 "max_cntlid": 65519, 00:12:20.517 "namespaces": [ 00:12:20.517 { 00:12:20.517 "nsid": 1, 00:12:20.517 "bdev_name": "Malloc2", 00:12:20.517 "name": "Malloc2", 00:12:20.517 "nguid": "ED9A97D017A54EC993002671B6B6927A", 00:12:20.517 "uuid": "ed9a97d0-17a5-4ec9-9300-2671b6b6927a" 00:12:20.517 }, 00:12:20.517 { 00:12:20.517 "nsid": 2, 00:12:20.517 "bdev_name": "Malloc4", 00:12:20.517 "name": "Malloc4", 00:12:20.517 "nguid": "E5A847631A60467386504C8103A640BF", 00:12:20.517 "uuid": "e5a84763-1a60-4673-8650-4c8103a640bf" 00:12:20.517 } 00:12:20.517 ] 00:12:20.517 } 00:12:20.517 ] 00:12:20.517 04:03:34 -- target/nvmf_vfio_user.sh@44 -- # wait 3756110 00:12:20.517 04:03:34 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:20.517 04:03:34 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3747167 00:12:20.517 04:03:34 -- common/autotest_common.sh@936 -- # '[' -z 3747167 ']' 00:12:20.517 04:03:34 -- common/autotest_common.sh@940 -- # kill -0 3747167 00:12:20.517 04:03:34 -- common/autotest_common.sh@941 -- # uname 00:12:20.517 04:03:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:20.517 04:03:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3747167 00:12:20.517 04:03:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:20.517 04:03:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:20.517 04:03:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3747167' 00:12:20.517 killing process with pid 3747167 00:12:20.517 04:03:35 -- common/autotest_common.sh@955 -- # kill 3747167 00:12:20.517 [2024-04-19 04:03:35.026559] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:20.517 04:03:35 -- common/autotest_common.sh@960 -- # wait 3747167 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3756373 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3756373' 00:12:21.084 Process pid: 3756373 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.084 04:03:35 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3756373 00:12:21.084 04:03:35 -- common/autotest_common.sh@817 -- # '[' -z 3756373 ']' 00:12:21.084 04:03:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.084 04:03:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:21.084 04:03:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:21.084 04:03:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:21.084 04:03:35 -- common/autotest_common.sh@10 -- # set +x 00:12:21.084 [2024-04-19 04:03:35.380244] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:21.084 [2024-04-19 04:03:35.381537] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:21.084 [2024-04-19 04:03:35.381584] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.084 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.084 [2024-04-19 04:03:35.467090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.084 [2024-04-19 04:03:35.555910] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.084 [2024-04-19 04:03:35.555954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.084 [2024-04-19 04:03:35.555965] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.084 [2024-04-19 04:03:35.555973] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.084 [2024-04-19 04:03:35.555981] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.084 [2024-04-19 04:03:35.556032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.084 [2024-04-19 04:03:35.556146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.084 [2024-04-19 04:03:35.556263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.084 [2024-04-19 04:03:35.556262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.343 [2024-04-19 04:03:35.644020] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:12:21.343 [2024-04-19 04:03:35.644260] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:12:21.343 [2024-04-19 04:03:35.644590] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:21.343 [2024-04-19 04:03:35.645086] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:21.343 [2024-04-19 04:03:35.645220] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
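(The rpc.py provisioning sequence traced below creates the VFIOUSER transport with -M -I, then a malloc bdev, subsystem, namespace, and listener for each of the two devices. It can be replayed by hand against the target started above; a minimal sketch reusing the exact names and sizes from this run, with SPDK_DIR the same assumed path as in the previous sketch.)

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $RPC bdev_malloc_create 64 512 -b "Malloc$i"
      $RPC nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $RPC nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $RPC nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" -t VFIOUSER \
          -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done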
00:12:21.909 04:03:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:21.909 04:03:36 -- common/autotest_common.sh@850 -- # return 0 00:12:21.909 04:03:36 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:22.843 04:03:37 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:23.102 04:03:37 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:23.102 04:03:37 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:23.102 04:03:37 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.102 04:03:37 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:23.102 04:03:37 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.361 Malloc1 00:12:23.361 04:03:37 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:23.620 04:03:37 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.878 04:03:38 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.137 04:03:38 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.137 04:03:38 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:24.137 04:03:38 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.395 Malloc2 00:12:24.395 04:03:38 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.654 04:03:39 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:24.912 04:03:39 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:25.171 04:03:39 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:25.171 04:03:39 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3756373 00:12:25.171 04:03:39 -- common/autotest_common.sh@936 -- # '[' -z 3756373 ']' 00:12:25.171 04:03:39 -- common/autotest_common.sh@940 -- # kill -0 3756373 00:12:25.171 04:03:39 -- common/autotest_common.sh@941 -- # uname 00:12:25.171 04:03:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.171 04:03:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3756373 00:12:25.171 04:03:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:25.171 04:03:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:25.171 04:03:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3756373' 00:12:25.171 killing process with pid 3756373 00:12:25.171 04:03:39 -- common/autotest_common.sh@955 -- # kill 3756373 00:12:25.171 04:03:39 -- common/autotest_common.sh@960 -- # wait 3756373 00:12:25.430 04:03:39 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:12:25.430 04:03:39 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.430 00:12:25.430 real 0m52.952s 00:12:25.430 user 3m29.263s 00:12:25.430 sys 0m3.989s 00:12:25.430 04:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.430 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.430 ************************************ 00:12:25.430 END TEST nvmf_vfio_user 00:12:25.430 ************************************ 00:12:25.430 04:03:39 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.430 04:03:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:25.430 04:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.430 04:03:39 -- common/autotest_common.sh@10 -- # set +x 00:12:25.688 ************************************ 00:12:25.688 START TEST nvmf_vfio_user_nvme_compliance 00:12:25.688 ************************************ 00:12:25.688 04:03:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.688 * Looking for test storage... 00:12:25.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:25.688 04:03:40 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.688 04:03:40 -- nvmf/common.sh@7 -- # uname -s 00:12:25.688 04:03:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.688 04:03:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.688 04:03:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.688 04:03:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.688 04:03:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.688 04:03:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.688 04:03:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.689 04:03:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.689 04:03:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.689 04:03:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.689 04:03:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:25.689 04:03:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:25.689 04:03:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.689 04:03:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.689 04:03:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.689 04:03:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.689 04:03:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.689 04:03:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.689 04:03:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.689 04:03:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.689 04:03:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.689 04:03:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.689 04:03:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.689 04:03:40 -- paths/export.sh@5 -- # export PATH 00:12:25.689 04:03:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.689 04:03:40 -- nvmf/common.sh@47 -- # : 0 00:12:25.689 04:03:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.689 04:03:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.689 04:03:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.689 04:03:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.689 04:03:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.689 04:03:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.689 04:03:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.689 04:03:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.689 04:03:40 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.689 04:03:40 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.689 04:03:40 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:25.689 04:03:40 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:25.689 04:03:40 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:25.689 04:03:40 -- compliance/compliance.sh@20 -- # nvmfpid=3757248 00:12:25.689 04:03:40 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3757248' 00:12:25.689 Process pid: 3757248 00:12:25.689 04:03:40 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.689 04:03:40 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:25.689 04:03:40 -- compliance/compliance.sh@24 -- # waitforlisten 3757248 00:12:25.689 04:03:40 -- common/autotest_common.sh@817 -- # '[' -z 3757248 ']' 00:12:25.689 04:03:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.689 04:03:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:25.689 04:03:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.689 04:03:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:25.689 04:03:40 -- common/autotest_common.sh@10 -- # set +x 00:12:25.689 [2024-04-19 04:03:40.164160] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:12:25.689 [2024-04-19 04:03:40.164210] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.689 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.947 [2024-04-19 04:03:40.234562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.947 [2024-04-19 04:03:40.325113] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.947 [2024-04-19 04:03:40.325158] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.947 [2024-04-19 04:03:40.325169] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.947 [2024-04-19 04:03:40.325178] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.947 [2024-04-19 04:03:40.325186] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
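(Once this target is up, the compliance flow traced in the lines that follow reduces to one malloc-backed VFIOUSER subsystem plus the nvme_compliance binary. A minimal sketch using the values printed in this log; SPDK_DIR remains an assumed path.)

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $RPC bdev_malloc_create 64 512 -b malloc0
  $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  "$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'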
00:12:25.947 [2024-04-19 04:03:40.325239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.947 [2024-04-19 04:03:40.325371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.947 [2024-04-19 04:03:40.325378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.947 04:03:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:25.947 04:03:40 -- common/autotest_common.sh@850 -- # return 0 00:12:25.947 04:03:40 -- compliance/compliance.sh@26 -- # sleep 1 00:12:27.324 04:03:41 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:27.324 04:03:41 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:27.324 04:03:41 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:27.324 04:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.324 04:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.324 04:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.324 04:03:41 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:27.324 04:03:41 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:27.324 04:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.324 04:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.324 malloc0 00:12:27.324 04:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.324 04:03:41 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:27.324 04:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.324 04:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.324 04:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.324 04:03:41 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:27.324 04:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.324 04:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.324 04:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.324 04:03:41 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:27.324 04:03:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.324 04:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.324 04:03:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.324 04:03:41 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:27.324 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.324 00:12:27.324 00:12:27.324 CUnit - A unit testing framework for C - Version 2.1-3 00:12:27.324 http://cunit.sourceforge.net/ 00:12:27.324 00:12:27.324 00:12:27.324 Suite: nvme_compliance 00:12:27.324 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-19 04:03:41.697584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.324 [2024-04-19 04:03:41.699027] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:27.324 [2024-04-19 04:03:41.699047] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:27.324 [2024-04-19 04:03:41.699056] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:27.324 
[2024-04-19 04:03:41.701616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.324 passed 00:12:27.324 Test: admin_identify_ctrlr_verify_fused ...[2024-04-19 04:03:41.802306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.324 [2024-04-19 04:03:41.805333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.324 passed 00:12:27.582 Test: admin_identify_ns ...[2024-04-19 04:03:41.905523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.582 [2024-04-19 04:03:41.967365] ctrlr.c:2668:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:27.582 [2024-04-19 04:03:41.975373] ctrlr.c:2668:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:27.582 [2024-04-19 04:03:41.996483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.582 passed 00:12:27.582 Test: admin_get_features_mandatory_features ...[2024-04-19 04:03:42.092329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.582 [2024-04-19 04:03:42.098380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.841 passed 00:12:27.841 Test: admin_get_features_optional_features ...[2024-04-19 04:03:42.194062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.841 [2024-04-19 04:03:42.197084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.841 passed 00:12:27.841 Test: admin_set_features_number_of_queues ...[2024-04-19 04:03:42.297164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.100 [2024-04-19 04:03:42.402462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.100 passed 00:12:28.100 Test: admin_get_log_page_mandatory_logs ...[2024-04-19 04:03:42.498221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.100 [2024-04-19 04:03:42.501244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.100 passed 00:12:28.100 Test: admin_get_log_page_with_lpo ...[2024-04-19 04:03:42.601395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.358 [2024-04-19 04:03:42.671357] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:28.358 [2024-04-19 04:03:42.684433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.358 passed 00:12:28.358 Test: fabric_property_get ...[2024-04-19 04:03:42.780240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.358 [2024-04-19 04:03:42.781562] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:28.358 [2024-04-19 04:03:42.785286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.358 passed 00:12:28.358 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-19 04:03:42.882925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.358 [2024-04-19 04:03:42.884185] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:28.617 [2024-04-19 04:03:42.886950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:12:28.617 passed 00:12:28.617 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-19 04:03:42.983135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.617 [2024-04-19 04:03:43.066356] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.617 [2024-04-19 04:03:43.082358] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.617 [2024-04-19 04:03:43.087448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.617 passed 00:12:28.875 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-19 04:03:43.187094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.875 [2024-04-19 04:03:43.188360] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:28.875 [2024-04-19 04:03:43.191123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.875 passed 00:12:28.875 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-19 04:03:43.289017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.875 [2024-04-19 04:03:43.365366] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:28.875 [2024-04-19 04:03:43.389356] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.875 [2024-04-19 04:03:43.394469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.133 passed 00:12:29.133 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-19 04:03:43.492337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.133 [2024-04-19 04:03:43.493624] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:29.133 [2024-04-19 04:03:43.493647] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:29.133 [2024-04-19 04:03:43.495363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.133 passed 00:12:29.133 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-19 04:03:43.594137] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.391 [2024-04-19 04:03:43.685351] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:29.391 [2024-04-19 04:03:43.693350] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:29.391 [2024-04-19 04:03:43.701354] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:29.391 [2024-04-19 04:03:43.709351] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:29.391 [2024-04-19 04:03:43.738462] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.391 passed 00:12:29.391 Test: admin_create_io_sq_verify_pc ...[2024-04-19 04:03:43.835190] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.391 [2024-04-19 04:03:43.850359] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:29.391 [2024-04-19 04:03:43.868360] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.391 passed 00:12:29.650 Test: admin_create_io_qp_max_qps ...[2024-04-19 04:03:43.969052] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.588 [2024-04-19 04:03:45.076356] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:31.155 [2024-04-19 04:03:45.465662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.155 passed 00:12:31.155 Test: admin_create_io_sq_shared_cq ...[2024-04-19 04:03:45.563143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.415 [2024-04-19 04:03:45.689355] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:31.415 [2024-04-19 04:03:45.726428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.415 passed 00:12:31.415 00:12:31.415 Run Summary: Type Total Ran Passed Failed Inactive 00:12:31.415 suites 1 1 n/a 0 0 00:12:31.415 tests 18 18 18 0 0 00:12:31.415 asserts 360 360 360 0 n/a 00:12:31.415 00:12:31.415 Elapsed time = 1.699 seconds 00:12:31.415 04:03:45 -- compliance/compliance.sh@42 -- # killprocess 3757248 00:12:31.415 04:03:45 -- common/autotest_common.sh@936 -- # '[' -z 3757248 ']' 00:12:31.415 04:03:45 -- common/autotest_common.sh@940 -- # kill -0 3757248 00:12:31.415 04:03:45 -- common/autotest_common.sh@941 -- # uname 00:12:31.415 04:03:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.415 04:03:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3757248 00:12:31.415 04:03:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.415 04:03:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.415 04:03:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3757248' 00:12:31.415 killing process with pid 3757248 00:12:31.415 04:03:45 -- common/autotest_common.sh@955 -- # kill 3757248 00:12:31.415 04:03:45 -- common/autotest_common.sh@960 -- # wait 3757248 00:12:31.674 04:03:46 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:31.674 04:03:46 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:31.674 00:12:31.674 real 0m6.088s 00:12:31.674 user 0m17.129s 00:12:31.674 sys 0m0.491s 00:12:31.674 04:03:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.674 04:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:31.674 ************************************ 00:12:31.674 END TEST nvmf_vfio_user_nvme_compliance 00:12:31.674 ************************************ 00:12:31.674 04:03:46 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.674 04:03:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.674 04:03:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.674 04:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:31.933 ************************************ 00:12:31.933 START TEST nvmf_vfio_user_fuzz 00:12:31.933 ************************************ 00:12:31.933 04:03:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.933 * Looking for test storage... 
00:12:31.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.933 04:03:46 -- nvmf/common.sh@7 -- # uname -s 00:12:31.933 04:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.933 04:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.933 04:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.933 04:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.933 04:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.933 04:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.933 04:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.933 04:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.933 04:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.933 04:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.933 04:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:31.933 04:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:31.933 04:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.933 04:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.933 04:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.933 04:03:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.933 04:03:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.933 04:03:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.933 04:03:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.933 04:03:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.933 04:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.933 04:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.933 04:03:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.933 04:03:46 -- paths/export.sh@5 -- # export PATH 00:12:31.933 04:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.933 04:03:46 -- nvmf/common.sh@47 -- # : 0 00:12:31.933 04:03:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.933 04:03:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.933 04:03:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.933 04:03:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.933 04:03:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.933 04:03:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.933 04:03:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.933 04:03:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3758585 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3758585' 00:12:31.933 Process pid: 3758585 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:31.933 04:03:46 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3758585 00:12:31.933 04:03:46 -- common/autotest_common.sh@817 -- # '[' -z 3758585 ']' 00:12:31.933 04:03:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.933 04:03:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.933 04:03:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
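(The fuzz run traced below reuses the same malloc0/cnode0 target shape as the compliance sketch earlier; only the client binary changes. A minimal sketch with the flags copied verbatim from the invocation in this log; SPDK_DIR is assumed as before, and -t appears to cap the run at 30 seconds while -S appears to seed the generator, consistent with the roughly 31-second wall time logged.)

  # Fuzz the vfio-user controller; flags taken from the run traced below.
  "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a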
00:12:31.933 04:03:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:12:31.933 04:03:46 -- common/autotest_common.sh@10 -- # set +x
00:12:32.191 04:03:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:12:32.191 04:03:46 -- common/autotest_common.sh@850 -- # return 0
00:12:32.191 04:03:46 -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:12:33.567 04:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:33.567 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:33.567 04:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:12:33.567 04:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:33.567 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:33.567 malloc0
00:12:33.567 04:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:12:33.567 04:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:33.567 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:33.567 04:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:12:33.567 04:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:33.567 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:33.567 04:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:12:33.567 04:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:33.567 04:03:47 -- common/autotest_common.sh@10 -- # set +x
00:12:33.567 04:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:12:33.567 04:03:47 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
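The trace above stands up a vfio-user NVMe-oF target end to end: create the VFIOUSER transport, back it with a 64 MiB malloc bdev, create subsystem nqn.2021-09.io.spdk:cnode0, attach the namespace, and listen on the /var/run/vfio-user socket directory, after which nvme_fuzz is aimed at that listener for 30 seconds with seed 123456. The same sequence issued by hand against a running nvmf_tgt, as a sketch (assuming the default /var/tmp/spdk.sock RPC socket):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0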
00:13:05.743 Fuzzing completed. Shutting down the fuzz application
00:13:05.743
00:13:05.743 Dumping successful admin opcodes:
00:13:05.743 8, 9, 10, 24,
00:13:05.743 Dumping successful io opcodes:
00:13:05.743 0,
00:13:05.743 NS: 0x200003a1ef00 I/O qp, Total commands completed: 745543, total successful commands: 2881, random_seed: 812473856
00:13:05.743 NS: 0x200003a1ef00 admin qp, Total commands completed: 180848, total successful commands: 1459, random_seed: 2485775744
00:13:05.743 04:04:18 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:13:05.743 04:04:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:05.743 04:04:18 -- common/autotest_common.sh@10 -- # set +x
00:13:05.743 04:04:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:05.743 04:04:18 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3758585
00:13:05.743 04:04:18 -- common/autotest_common.sh@936 -- # '[' -z 3758585 ']'
00:13:05.743 04:04:18 -- common/autotest_common.sh@940 -- # kill -0 3758585
00:13:05.743 04:04:18 -- common/autotest_common.sh@941 -- # uname
00:13:05.743 04:04:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:05.743 04:04:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3758585
00:13:05.743 04:04:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:05.743 04:04:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:05.743 04:04:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3758585'
00:13:05.743 killing process with pid 3758585
00:13:05.743 04:04:18 -- common/autotest_common.sh@955 -- # kill 3758585
00:13:05.743 04:04:18 -- common/autotest_common.sh@960 -- # wait 3758585
00:13:05.743 04:04:18 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:13:05.743 04:04:18 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:13:05.743
00:13:05.743 real 0m32.344s
00:13:05.743 user 0m35.940s
00:13:05.743 sys 0m24.463s
00:13:05.743 04:04:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:05.743 04:04:18 -- common/autotest_common.sh@10 -- # set +x
00:13:05.743 ************************************
00:13:05.743 END TEST nvmf_vfio_user_fuzz
00:13:05.743 ************************************
00:13:05.743 04:04:18 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:13:05.743 04:04:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:05.743 04:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:05.743 04:04:18 -- common/autotest_common.sh@10 -- # set +x
00:13:05.743 ************************************
00:13:05.743 START TEST nvmf_host_management
00:13:05.743 ************************************
00:13:05.743 04:04:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:13:05.743 * Looking for test storage...
00:13:05.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.743 04:04:18 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.743 04:04:18 -- nvmf/common.sh@7 -- # uname -s 00:13:05.743 04:04:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.743 04:04:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.743 04:04:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.743 04:04:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.743 04:04:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.743 04:04:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.743 04:04:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.743 04:04:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.743 04:04:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.743 04:04:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.743 04:04:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:05.743 04:04:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:05.743 04:04:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.743 04:04:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.743 04:04:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.743 04:04:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.743 04:04:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.743 04:04:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.743 04:04:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.743 04:04:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.743 04:04:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.743 04:04:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.743 04:04:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.743 04:04:18 -- paths/export.sh@5 -- # export PATH 00:13:05.743 04:04:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.743 04:04:18 -- nvmf/common.sh@47 -- # : 0 00:13:05.743 04:04:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.743 04:04:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.743 04:04:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.743 04:04:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.743 04:04:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.743 04:04:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.743 04:04:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.743 04:04:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.744 04:04:18 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.744 04:04:18 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.744 04:04:18 -- target/host_management.sh@105 -- # nvmftestinit 00:13:05.744 04:04:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:05.744 04:04:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.744 04:04:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:05.744 04:04:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:05.744 04:04:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:05.744 04:04:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.744 04:04:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.744 04:04:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.744 04:04:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:05.744 04:04:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:05.744 04:04:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.744 04:04:18 -- common/autotest_common.sh@10 -- # set +x 00:13:09.940 04:04:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.940 04:04:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.940 04:04:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.940 04:04:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.940 04:04:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.940 04:04:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.940 04:04:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.940 04:04:24 -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.940 04:04:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.940 
04:04:24 -- nvmf/common.sh@296 -- # e810=() 00:13:09.940 04:04:24 -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.940 04:04:24 -- nvmf/common.sh@297 -- # x722=() 00:13:09.940 04:04:24 -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.940 04:04:24 -- nvmf/common.sh@298 -- # mlx=() 00:13:09.940 04:04:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.940 04:04:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.940 04:04:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.940 04:04:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.940 04:04:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.940 04:04:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.940 04:04:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:09.940 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:09.940 04:04:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.940 04:04:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:09.940 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:09.940 04:04:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.940 04:04:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.940 04:04:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.941 04:04:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.941 04:04:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:09.941 04:04:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.941 04:04:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:13:09.941 Found net devices under 0000:af:00.0: cvl_0_0 00:13:09.941 04:04:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.941 04:04:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.941 04:04:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.941 04:04:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:09.941 04:04:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.941 04:04:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:09.941 Found net devices under 0000:af:00.1: cvl_0_1 00:13:09.941 04:04:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.941 04:04:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:09.941 04:04:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:09.941 04:04:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:09.941 04:04:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:09.941 04:04:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:09.941 04:04:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.941 04:04:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.941 04:04:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.941 04:04:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.941 04:04:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.941 04:04:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.941 04:04:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.941 04:04:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.941 04:04:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.941 04:04:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.941 04:04:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.941 04:04:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.941 04:04:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.941 04:04:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.941 04:04:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.941 04:04:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.941 04:04:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.941 04:04:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.200 04:04:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.200 04:04:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:10.200 00:13:10.200 --- 10.0.0.2 ping statistics --- 00:13:10.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.200 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:10.200 04:04:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:13:10.200 00:13:10.200 --- 10.0.0.1 ping statistics --- 00:13:10.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.200 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:13:10.200 04:04:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.200 04:04:24 -- nvmf/common.sh@411 -- # return 0 00:13:10.200 04:04:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:10.200 04:04:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.200 04:04:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:10.200 04:04:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:10.200 04:04:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.200 04:04:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:10.200 04:04:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:10.200 04:04:24 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:10.200 04:04:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:10.200 04:04:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.200 04:04:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.200 ************************************ 00:13:10.200 START TEST nvmf_host_management 00:13:10.200 ************************************ 00:13:10.200 04:04:24 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:10.200 04:04:24 -- target/host_management.sh@69 -- # starttarget 00:13:10.200 04:04:24 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:10.200 04:04:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:10.200 04:04:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:10.200 04:04:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.200 04:04:24 -- nvmf/common.sh@470 -- # nvmfpid=3767536 00:13:10.200 04:04:24 -- nvmf/common.sh@471 -- # waitforlisten 3767536 00:13:10.200 04:04:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:10.200 04:04:24 -- common/autotest_common.sh@817 -- # '[' -z 3767536 ']' 00:13:10.201 04:04:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.201 04:04:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.201 04:04:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.201 04:04:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.201 04:04:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.460 [2024-04-19 04:04:24.739011] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:10.460 [2024-04-19 04:04:24.739067] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.460 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.460 [2024-04-19 04:04:24.819477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.460 [2024-04-19 04:04:24.910911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:10.460 [2024-04-19 04:04:24.910955] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.460 [2024-04-19 04:04:24.910966] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.460 [2024-04-19 04:04:24.910975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.460 [2024-04-19 04:04:24.910982] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.460 [2024-04-19 04:04:24.911092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.460 [2024-04-19 04:04:24.911217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.460 [2024-04-19 04:04:24.911327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.460 [2024-04-19 04:04:24.911327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.719 04:04:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:10.719 04:04:25 -- common/autotest_common.sh@850 -- # return 0 00:13:10.719 04:04:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:10.719 04:04:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 04:04:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.719 04:04:25 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.719 04:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 [2024-04-19 04:04:25.069524] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.719 04:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.719 04:04:25 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:10.719 04:04:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 04:04:25 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:10.719 04:04:25 -- target/host_management.sh@23 -- # cat 00:13:10.719 04:04:25 -- target/host_management.sh@30 -- # rpc_cmd 00:13:10.719 04:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 Malloc0 00:13:10.719 [2024-04-19 04:04:25.133516] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.719 04:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:10.719 04:04:25 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:10.719 04:04:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:10.719 04:04:25 -- target/host_management.sh@73 -- # perfpid=3767577 00:13:10.719 04:04:25 -- target/host_management.sh@74 -- # waitforlisten 3767577 /var/tmp/bdevperf.sock 00:13:10.719 04:04:25 -- common/autotest_common.sh@817 -- # '[' -z 3767577 ']' 00:13:10.719 04:04:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.719 04:04:25 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10
00:13:10.719 04:04:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:10.719 04:04:25 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:13:10.719 04:04:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:13:10.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:13:10.719 04:04:25 -- nvmf/common.sh@521 -- # config=()
00:13:10.719 04:04:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:10.719 04:04:25 -- nvmf/common.sh@521 -- # local subsystem config
00:13:10.719 04:04:25 -- common/autotest_common.sh@10 -- # set +x
00:13:10.719 04:04:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:13:10.719 04:04:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:13:10.719 {
00:13:10.719 "params": {
00:13:10.719 "name": "Nvme$subsystem",
00:13:10.719 "trtype": "$TEST_TRANSPORT",
00:13:10.719 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:10.719 "adrfam": "ipv4",
00:13:10.719 "trsvcid": "$NVMF_PORT",
00:13:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:10.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:10.719 "hdgst": ${hdgst:-false},
00:13:10.719 "ddgst": ${ddgst:-false}
00:13:10.719 },
00:13:10.719 "method": "bdev_nvme_attach_controller"
00:13:10.719 }
00:13:10.719 EOF
00:13:10.719 )")
00:13:10.719 04:04:25 -- nvmf/common.sh@543 -- # cat
00:13:10.719 04:04:25 -- nvmf/common.sh@545 -- # jq .
00:13:10.719 04:04:25 -- nvmf/common.sh@546 -- # IFS=,
00:13:10.719 04:04:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:13:10.719 "params": {
00:13:10.719 "name": "Nvme0",
00:13:10.719 "trtype": "tcp",
00:13:10.719 "traddr": "10.0.0.2",
00:13:10.719 "adrfam": "ipv4",
00:13:10.719 "trsvcid": "4420",
00:13:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:13:10.719 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:13:10.719 "hdgst": false,
00:13:10.719 "ddgst": false
00:13:10.719 },
00:13:10.719 "method": "bdev_nvme_attach_controller"
00:13:10.719 }'
00:13:10.719 [2024-04-19 04:04:25.226778] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... [2024-04-19 04:04:25.226833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767577 ]
00:13:10.978 EAL: No free 2048 kB hugepages reported on node 1
00:13:10.978 [2024-04-19 04:04:25.308616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.978 [2024-04-19 04:04:25.393157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.237 Running I/O for 10 seconds...
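gen_nvmf_target_json expands the heredoc traced above into the bdev_nvme_attach_controller configuration that bdevperf reads from --json on an anonymous pipe, so no config file ever touches disk. A sketch of the equivalent manual invocation with the document written out to a file (the /tmp path is illustrative, and the "subsystems"/"bdev" envelope around the printed params is an assumption about what gen_nvmf_target_json emits):

    # Write the bdev config bdevperf will load, then run the same workload.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10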
00:13:11.237 04:04:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:11.237 04:04:25 -- common/autotest_common.sh@850 -- # return 0 00:13:11.237 04:04:25 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:11.237 04:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.237 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.237 04:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.237 04:04:25 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:11.237 04:04:25 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:11.237 04:04:25 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:11.237 04:04:25 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:11.237 04:04:25 -- target/host_management.sh@52 -- # local ret=1 00:13:11.237 04:04:25 -- target/host_management.sh@53 -- # local i 00:13:11.237 04:04:25 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:11.237 04:04:25 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:11.237 04:04:25 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:11.237 04:04:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.237 04:04:25 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:11.237 04:04:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 04:04:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.496 04:04:25 -- target/host_management.sh@55 -- # read_io_count=67 00:13:11.496 04:04:25 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:11.496 04:04:25 -- target/host_management.sh@62 -- # sleep 0.25 00:13:11.758 04:04:26 -- target/host_management.sh@54 -- # (( i-- )) 00:13:11.758 04:04:26 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:11.758 04:04:26 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:11.758 04:04:26 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:11.758 04:04:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.758 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:13:11.758 04:04:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.758 04:04:26 -- target/host_management.sh@55 -- # read_io_count=515 00:13:11.758 04:04:26 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:13:11.758 04:04:26 -- target/host_management.sh@59 -- # ret=0 00:13:11.758 04:04:26 -- target/host_management.sh@60 -- # break 00:13:11.758 04:04:26 -- target/host_management.sh@64 -- # return 0 00:13:11.758 04:04:26 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:11.758 04:04:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.758 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:13:11.758 [2024-04-19 04:04:26.108969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 
[2024-04-19 04:04:26.109043] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109052] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.109175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112abb0 is same with the state(5) to be set 00:13:11.758 04:04:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.758 04:04:26 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:11.758 04:04:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.758 04:04:26 -- common/autotest_common.sh@10 -- # set +x 00:13:11.758 04:04:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:11.758 04:04:26 -- target/host_management.sh@87 -- # sleep 1 00:13:11.758 [2024-04-19 04:04:26.122799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.758 [2024-04-19 04:04:26.122838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.122851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.758 [2024-04-19 04:04:26.122863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.122875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.758 [2024-04-19 04:04:26.122887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.122897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.758 [2024-04-19 04:04:26.122907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.122918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b2cc0 is same with the state(5) to be set 00:13:11.758 [2024-04-19 04:04:26.123260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.758 [2024-04-19 04:04:26.123480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.758 [2024-04-19 04:04:26.123492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.123979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.123989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.759 [2024-04-19 04:04:26.124369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.759 [2024-04-19 04:04:26.124382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.760 [2024-04-19 04:04:26.124394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.760 [2024-04-19 04:04:26.124404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.760 [2024-04-19 04:04:26.124419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:11.760 [2024-04-19 04:04:26.124812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:11.760 [2024-04-19 04:04:26.124881] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17c3b90 was disconnected and freed. reset controller.
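The flood above is one command/completion pair per outstanding request: every queued WRITE (cid 48 through 63) fails with ABORTED - SQ DELETION, status (00/08), i.e. generic status code 0x08, command aborted because the target tore down the submission queue mid-reset; bdev_nvme then frees the qpair and resets the controller. When triaging such floods from a saved console log, a quick tally is often enough (a sketch; the file name build.log is assumed, not part of this run):

    # Count aborted completions and list which cids they hit (hypothetical helper).
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'WRITE sqid:[0-9]* cid:[0-9]*' build.log | sort | uniq -c | sort -rn | head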
00:13:11.760 [2024-04-19 04:04:26.126217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:11.760 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:11.760 00:13:11.760 Latency(us) 00:13:11.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.760 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:11.760 Job: Nvme0n1 ended in about 0.45 seconds with error 00:13:11.760 Verification LBA range: start 0x0 length 0x400 00:13:11.760 Nvme0n1 : 0.45 1291.74 80.73 143.53 0.00 43070.56 2442.71 40036.54 00:13:11.760 =================================================================================================================== 00:13:11.760 Total : 1291.74 80.73 143.53 0.00 43070.56 2442.71 40036.54 00:13:11.760 [2024-04-19 04:04:26.128529] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:11.760 [2024-04-19 04:04:26.128549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b2cc0 (9): Bad file descriptor 00:13:11.760 [2024-04-19 04:04:26.139290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:12.698 04:04:27 -- target/host_management.sh@91 -- # kill -9 3767577 00:13:12.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3767577) - No such process 00:13:12.698 04:04:27 -- target/host_management.sh@91 -- # true 00:13:12.698 04:04:27 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:12.698 04:04:27 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:12.698 04:04:27 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:12.698 04:04:27 -- nvmf/common.sh@521 -- # config=() 00:13:12.698 04:04:27 -- nvmf/common.sh@521 -- # local subsystem config 00:13:12.698 04:04:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:12.698 04:04:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:12.698 { 00:13:12.698 "params": { 00:13:12.698 "name": "Nvme$subsystem", 00:13:12.698 "trtype": "$TEST_TRANSPORT", 00:13:12.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:12.698 "adrfam": "ipv4", 00:13:12.698 "trsvcid": "$NVMF_PORT", 00:13:12.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:12.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:12.698 "hdgst": ${hdgst:-false}, 00:13:12.698 "ddgst": ${ddgst:-false} 00:13:12.698 }, 00:13:12.698 "method": "bdev_nvme_attach_controller" 00:13:12.698 } 00:13:12.698 EOF 00:13:12.698 )") 00:13:12.698 04:04:27 -- nvmf/common.sh@543 -- # cat 00:13:12.698 04:04:27 -- nvmf/common.sh@545 -- # jq . 00:13:12.698 04:04:27 -- nvmf/common.sh@546 -- # IFS=, 00:13:12.698 04:04:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:12.698 "params": { 00:13:12.698 "name": "Nvme0", 00:13:12.698 "trtype": "tcp", 00:13:12.698 "traddr": "10.0.0.2", 00:13:12.698 "adrfam": "ipv4", 00:13:12.698 "trsvcid": "4420", 00:13:12.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:12.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:12.698 "hdgst": false, 00:13:12.698 "ddgst": false 00:13:12.698 }, 00:13:12.698 "method": "bdev_nvme_attach_controller" 00:13:12.698 }' 00:13:12.698 [2024-04-19 04:04:27.175989] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
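The trace above shows gen_nvmf_target_json assembling bdevperf's configuration inline and handing it over on a file descriptor (--json /dev/fd/62). Only the inner attach object appears verbatim in the xtrace; assuming the usual subsystems/bdev wrapper added by the helper's jq step, the generated file is equivalent to this sketch:

    # Reconstructed bdevperf config; inner object copied from the trace above,
    # outer wrapper assumed. Written to a temp file instead of /dev/fd/62.
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1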
00:13:12.698 [2024-04-19 04:04:27.176049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767920 ] 00:13:12.698 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.957 [2024-04-19 04:04:27.257984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.957 [2024-04-19 04:04:27.341043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.215 Running I/O for 1 seconds... 00:13:14.152 00:13:14.152 Latency(us) 00:13:14.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.152 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:14.152 Verification LBA range: start 0x0 length 0x400 00:13:14.152 Nvme0n1 : 1.03 1372.22 85.76 0.00 0.00 45705.82 9175.04 39559.91 00:13:14.152 =================================================================================================================== 00:13:14.152 Total : 1372.22 85.76 0.00 0.00 45705.82 9175.04 39559.91 00:13:14.411 04:04:28 -- target/host_management.sh@102 -- # stoptarget 00:13:14.411 04:04:28 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:14.411 04:04:28 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:14.411 04:04:28 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:14.411 04:04:28 -- target/host_management.sh@40 -- # nvmftestfini 00:13:14.411 04:04:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:14.411 04:04:28 -- nvmf/common.sh@117 -- # sync 00:13:14.411 04:04:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.411 04:04:28 -- nvmf/common.sh@120 -- # set +e 00:13:14.411 04:04:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.411 04:04:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.411 rmmod nvme_tcp 00:13:14.411 rmmod nvme_fabrics 00:13:14.411 rmmod nvme_keyring 00:13:14.411 04:04:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.411 04:04:28 -- nvmf/common.sh@124 -- # set -e 00:13:14.411 04:04:28 -- nvmf/common.sh@125 -- # return 0 00:13:14.411 04:04:28 -- nvmf/common.sh@478 -- # '[' -n 3767536 ']' 00:13:14.411 04:04:28 -- nvmf/common.sh@479 -- # killprocess 3767536 00:13:14.411 04:04:28 -- common/autotest_common.sh@936 -- # '[' -z 3767536 ']' 00:13:14.411 04:04:28 -- common/autotest_common.sh@940 -- # kill -0 3767536 00:13:14.411 04:04:28 -- common/autotest_common.sh@941 -- # uname 00:13:14.411 04:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:14.411 04:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3767536 00:13:14.670 04:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:14.670 04:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:14.670 04:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3767536' 00:13:14.670 killing process with pid 3767536 00:13:14.670 04:04:28 -- common/autotest_common.sh@955 -- # kill 3767536 00:13:14.670 04:04:28 -- common/autotest_common.sh@960 -- # wait 3767536 00:13:14.670 [2024-04-19 04:04:29.175881] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:14.930 04:04:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:14.930 04:04:29 -- nvmf/common.sh@484 
-- # [[ tcp == \t\c\p ]] 00:13:14.930 04:04:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:14.930 04:04:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.930 04:04:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.930 04:04:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.930 04:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.930 04:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.837 04:04:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.837 00:13:16.837 real 0m6.596s 00:13:16.837 user 0m19.611s 00:13:16.837 sys 0m1.094s 00:13:16.837 04:04:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.837 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:16.837 ************************************ 00:13:16.837 END TEST nvmf_host_management 00:13:16.837 ************************************ 00:13:16.837 04:04:31 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:16.837 00:13:16.837 real 0m12.546s 00:13:16.837 user 0m21.200s 00:13:16.837 sys 0m5.450s 00:13:16.837 04:04:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.837 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:16.837 ************************************ 00:13:16.837 END TEST nvmf_host_management 00:13:16.837 ************************************ 00:13:16.837 04:04:31 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.837 04:04:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:16.837 04:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.837 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:17.097 ************************************ 00:13:17.097 START TEST nvmf_lvol 00:13:17.097 ************************************ 00:13:17.097 04:04:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:17.097 * Looking for test storage... 
00:13:17.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.097 04:04:31 -- nvmf/common.sh@7 -- # uname -s 00:13:17.097 04:04:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.097 04:04:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.097 04:04:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.097 04:04:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.097 04:04:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.097 04:04:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.097 04:04:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.097 04:04:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.097 04:04:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.097 04:04:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.097 04:04:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:17.097 04:04:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:17.097 04:04:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.097 04:04:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.097 04:04:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.097 04:04:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.097 04:04:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.097 04:04:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.097 04:04:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.097 04:04:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.097 04:04:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.097 04:04:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.097 04:04:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.097 04:04:31 -- paths/export.sh@5 -- # export PATH 00:13:17.097 04:04:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.097 04:04:31 -- nvmf/common.sh@47 -- # : 0 00:13:17.097 04:04:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.097 04:04:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.097 04:04:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.097 04:04:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.097 04:04:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.097 04:04:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.097 04:04:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.097 04:04:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:17.097 04:04:31 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:17.097 04:04:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:17.097 04:04:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.097 04:04:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:17.097 04:04:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:17.097 04:04:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:17.097 04:04:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.097 04:04:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.097 04:04:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.097 04:04:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:17.097 04:04:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:17.097 04:04:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.097 04:04:31 -- common/autotest_common.sh@10 -- # set +x 00:13:23.666 04:04:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:23.666 04:04:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:23.666 04:04:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:23.666 04:04:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:23.666 04:04:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:23.666 04:04:36 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:13:23.666 04:04:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:23.666 04:04:36 -- nvmf/common.sh@295 -- # net_devs=() 00:13:23.666 04:04:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:23.666 04:04:36 -- nvmf/common.sh@296 -- # e810=() 00:13:23.666 04:04:36 -- nvmf/common.sh@296 -- # local -ga e810 00:13:23.666 04:04:36 -- nvmf/common.sh@297 -- # x722=() 00:13:23.666 04:04:36 -- nvmf/common.sh@297 -- # local -ga x722 00:13:23.666 04:04:36 -- nvmf/common.sh@298 -- # mlx=() 00:13:23.666 04:04:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:23.666 04:04:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.666 04:04:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:23.666 04:04:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:23.666 04:04:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.666 04:04:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:23.666 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:23.666 04:04:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:23.666 04:04:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:23.666 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:23.666 04:04:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.666 04:04:36 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.666 04:04:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.666 04:04:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:23.666 Found net devices under 0000:af:00.0: cvl_0_0 00:13:23.666 04:04:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.666 04:04:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:23.666 04:04:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.666 04:04:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.666 04:04:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:23.666 Found net devices under 0000:af:00.1: cvl_0_1 00:13:23.666 04:04:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.666 04:04:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:23.666 04:04:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:23.666 04:04:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:23.666 04:04:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.666 04:04:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.666 04:04:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.666 04:04:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:23.666 04:04:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.666 04:04:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.666 04:04:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:23.666 04:04:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.666 04:04:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.666 04:04:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:23.666 04:04:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:23.666 04:04:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.666 04:04:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.666 04:04:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.666 04:04:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.667 04:04:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:23.667 04:04:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.667 04:04:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.667 04:04:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.667 04:04:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:23.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:23.667 00:13:23.667 --- 10.0.0.2 ping statistics --- 00:13:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.667 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:23.667 04:04:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:13:23.667 00:13:23.667 --- 10.0.0.1 ping statistics --- 00:13:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.667 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:13:23.667 04:04:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.667 04:04:37 -- nvmf/common.sh@411 -- # return 0 00:13:23.667 04:04:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:23.667 04:04:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.667 04:04:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:23.667 04:04:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:23.667 04:04:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.667 04:04:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:23.667 04:04:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:23.667 04:04:37 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:23.667 04:04:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:23.667 04:04:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:23.667 04:04:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 04:04:37 -- nvmf/common.sh@470 -- # nvmfpid=3771884 00:13:23.667 04:04:37 -- nvmf/common.sh@471 -- # waitforlisten 3771884 00:13:23.667 04:04:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:23.667 04:04:37 -- common/autotest_common.sh@817 -- # '[' -z 3771884 ']' 00:13:23.667 04:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.667 04:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:23.667 04:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.667 04:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:23.667 04:04:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.667 [2024-04-19 04:04:37.328564] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:23.667 [2024-04-19 04:04:37.328617] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.667 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.667 [2024-04-19 04:04:37.415289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:23.667 [2024-04-19 04:04:37.503770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.667 [2024-04-19 04:04:37.503814] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.667 [2024-04-19 04:04:37.503824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.667 [2024-04-19 04:04:37.503833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.667 [2024-04-19 04:04:37.503840] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
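The nvmf_tcp_init block above splits the two e810 ports so the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace while the target side (cvl_0_0, 10.0.0.2) lives in cvl_0_0_ns_spdk, and it proves reachability in both directions before nvmf_tgt is launched inside the namespace, as the startup notices above show. A minimal sketch of that wiring, using the interface names from this run:

    # Namespace wiring as performed by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns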
00:13:23.667 [2024-04-19 04:04:37.503890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.667 [2024-04-19 04:04:37.504020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.667 [2024-04-19 04:04:37.504020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.926 04:04:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:23.926 04:04:38 -- common/autotest_common.sh@850 -- # return 0 00:13:23.926 04:04:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:23.926 04:04:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:23.926 04:04:38 -- common/autotest_common.sh@10 -- # set +x 00:13:23.926 04:04:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.926 04:04:38 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:24.184 [2024-04-19 04:04:38.527185] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.184 04:04:38 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.443 04:04:38 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:24.443 04:04:38 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.702 04:04:39 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:24.702 04:04:39 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:24.961 04:04:39 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:25.220 04:04:39 -- target/nvmf_lvol.sh@29 -- # lvs=c2ec7f8a-f0be-415c-8a82-80a40b2709a0 00:13:25.220 04:04:39 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c2ec7f8a-f0be-415c-8a82-80a40b2709a0 lvol 20 00:13:25.479 04:04:39 -- target/nvmf_lvol.sh@32 -- # lvol=57b307a2-1c38-4868-939d-e53a628f411d 00:13:25.479 04:04:39 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:25.738 04:04:40 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57b307a2-1c38-4868-939d-e53a628f411d 00:13:26.046 04:04:40 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:26.307 [2024-04-19 04:04:40.573298] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.307 04:04:40 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.566 04:04:40 -- target/nvmf_lvol.sh@42 -- # perf_pid=3772685 00:13:26.566 04:04:40 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:26.566 04:04:40 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:26.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.503 
04:04:41 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 57b307a2-1c38-4868-939d-e53a628f411d MY_SNAPSHOT 00:13:27.762 04:04:42 -- target/nvmf_lvol.sh@47 -- # snapshot=8d0f734e-c4ac-48ca-8fcc-53cfccb7125e 00:13:27.762 04:04:42 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 57b307a2-1c38-4868-939d-e53a628f411d 30 00:13:28.021 04:04:42 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8d0f734e-c4ac-48ca-8fcc-53cfccb7125e MY_CLONE 00:13:28.280 04:04:42 -- target/nvmf_lvol.sh@49 -- # clone=01f3ff4d-573e-4fcc-abde-078ad48ccc23 00:13:28.280 04:04:42 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 01f3ff4d-573e-4fcc-abde-078ad48ccc23 00:13:29.216 04:04:43 -- target/nvmf_lvol.sh@53 -- # wait 3772685 00:13:37.337 Initializing NVMe Controllers 00:13:37.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:37.337 Controller IO queue size 128, less than required. 00:13:37.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:37.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:37.337 Initialization complete. Launching workers. 00:13:37.337 ======================================================== 00:13:37.337 Latency(us) 00:13:37.337 Device Information : IOPS MiB/s Average min max 00:13:37.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8406.40 32.84 15236.65 2145.39 78442.16 00:13:37.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8336.00 32.56 15366.78 3306.25 81764.20 00:13:37.337 ======================================================== 00:13:37.337 Total : 16742.40 65.40 15301.44 2145.39 81764.20 00:13:37.337 00:13:37.337 04:04:51 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:37.337 04:04:51 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57b307a2-1c38-4868-939d-e53a628f411d 00:13:37.596 04:04:51 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2ec7f8a-f0be-415c-8a82-80a40b2709a0 00:13:37.856 04:04:52 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:37.856 04:04:52 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:37.856 04:04:52 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:37.856 04:04:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:37.856 04:04:52 -- nvmf/common.sh@117 -- # sync 00:13:37.856 04:04:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.856 04:04:52 -- nvmf/common.sh@120 -- # set +e 00:13:37.856 04:04:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.856 04:04:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.856 rmmod nvme_tcp 00:13:37.856 rmmod nvme_fabrics 00:13:37.856 rmmod nvme_keyring 00:13:37.856 04:04:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.856 04:04:52 -- nvmf/common.sh@124 -- # set -e 00:13:37.856 04:04:52 -- nvmf/common.sh@125 -- # return 0 00:13:37.856 04:04:52 -- nvmf/common.sh@478 -- # '[' -n 3771884 ']' 
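Condensed from the xtrace above, the whole lvol exercise is driven over RPC: two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported over NVMe/TCP, then snapshot, live resize to 30 MiB, clone, and inflate, all while spdk_nvme_perf runs randwrite against the namespace. A sketch of the sequence (rpc is shorthand for scripts/rpc.py; the UUIDs come back from the create calls):

    rpc() { scripts/rpc.py "$@"; }                    # shorthand used below
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512                     # -> Malloc0
    rpc bdev_malloc_create 64 512                     # -> Malloc1
    rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB volume
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snap=$(rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc bdev_lvol_resize "$lvol" 30                   # grow the live volume
    clone=$(rpc bdev_lvol_clone "$snap" MY_CLONE)
    rpc bdev_lvol_inflate "$clone"                    # detach clone from snapshot
    wait "$perf_pid"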
00:13:37.856 04:04:52 -- nvmf/common.sh@479 -- # killprocess 3771884 00:13:37.856 04:04:52 -- common/autotest_common.sh@936 -- # '[' -z 3771884 ']' 00:13:37.856 04:04:52 -- common/autotest_common.sh@940 -- # kill -0 3771884 00:13:37.856 04:04:52 -- common/autotest_common.sh@941 -- # uname 00:13:37.856 04:04:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:37.856 04:04:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3771884 00:13:37.856 04:04:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:37.856 04:04:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:37.856 04:04:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3771884' 00:13:37.856 killing process with pid 3771884 00:13:37.856 04:04:52 -- common/autotest_common.sh@955 -- # kill 3771884 00:13:37.856 04:04:52 -- common/autotest_common.sh@960 -- # wait 3771884 00:13:38.115 04:04:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:38.115 04:04:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:38.115 04:04:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:38.115 04:04:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.115 04:04:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.115 04:04:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.115 04:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.115 04:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.650 04:04:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.650 00:13:40.650 real 0m23.132s 00:13:40.650 user 1m8.974s 00:13:40.650 sys 0m7.113s 00:13:40.650 04:04:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:40.650 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.650 ************************************ 00:13:40.650 END TEST nvmf_lvol 00:13:40.650 ************************************ 00:13:40.650 04:04:54 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:40.650 04:04:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.650 04:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.650 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.650 ************************************ 00:13:40.650 START TEST nvmf_lvs_grow 00:13:40.650 ************************************ 00:13:40.650 04:04:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:40.650 * Looking for test storage... 
00:13:40.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.650 04:04:54 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.650 04:04:54 -- nvmf/common.sh@7 -- # uname -s 00:13:40.650 04:04:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.650 04:04:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.650 04:04:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.650 04:04:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.650 04:04:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.650 04:04:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.650 04:04:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.650 04:04:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.650 04:04:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.650 04:04:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.650 04:04:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:40.650 04:04:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:40.650 04:04:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.650 04:04:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.650 04:04:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.650 04:04:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.650 04:04:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.650 04:04:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.650 04:04:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.650 04:04:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.650 04:04:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.650 04:04:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.650 04:04:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.650 04:04:54 -- paths/export.sh@5 -- # export PATH 00:13:40.650 04:04:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.650 04:04:54 -- nvmf/common.sh@47 -- # : 0 00:13:40.650 04:04:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.650 04:04:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.650 04:04:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.650 04:04:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.650 04:04:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.650 04:04:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.650 04:04:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.650 04:04:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.650 04:04:54 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.650 04:04:54 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.650 04:04:54 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:40.650 04:04:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:40.650 04:04:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.650 04:04:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:40.650 04:04:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:40.650 04:04:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:40.650 04:04:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.650 04:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.650 04:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.650 04:04:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:40.650 04:04:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:40.650 04:04:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.650 04:04:54 -- common/autotest_common.sh@10 -- # set +x 00:13:45.946 04:05:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:45.946 04:05:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.946 04:05:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.946 04:05:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.946 04:05:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.946 04:05:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.946 04:05:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.946 04:05:00 -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.946 04:05:00 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.946 04:05:00 -- nvmf/common.sh@296 -- # e810=() 00:13:45.946 04:05:00 -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.946 04:05:00 -- nvmf/common.sh@297 -- # x722=() 00:13:45.946 04:05:00 -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.946 04:05:00 -- nvmf/common.sh@298 -- # mlx=() 00:13:45.946 04:05:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.946 04:05:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.946 04:05:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.946 04:05:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.946 04:05:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.946 04:05:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.946 04:05:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:45.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:45.946 04:05:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.946 04:05:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:45.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:45.946 04:05:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.946 04:05:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.946 04:05:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.946 04:05:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.946 04:05:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:45.946 04:05:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.946 04:05:00 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:45.946 Found net devices under 0000:af:00.0: cvl_0_0 00:13:45.946 04:05:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.946 04:05:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.946 04:05:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.946 04:05:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:45.946 04:05:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.946 04:05:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:45.947 Found net devices under 0000:af:00.1: cvl_0_1 00:13:45.947 04:05:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.947 04:05:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:45.947 04:05:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:45.947 04:05:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:45.947 04:05:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:45.947 04:05:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:45.947 04:05:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.947 04:05:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.947 04:05:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.947 04:05:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.947 04:05:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.947 04:05:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.947 04:05:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.947 04:05:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.947 04:05:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.947 04:05:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.947 04:05:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.947 04:05:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.947 04:05:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.947 04:05:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.947 04:05:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.947 04:05:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.947 04:05:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.207 04:05:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.207 04:05:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.207 04:05:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:13:46.207 00:13:46.207 --- 10.0.0.2 ping statistics --- 00:13:46.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.207 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:13:46.207 04:05:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:13:46.207 00:13:46.207 --- 10.0.0.1 ping statistics --- 00:13:46.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.207 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:13:46.207 04:05:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.207 04:05:00 -- nvmf/common.sh@411 -- # return 0 00:13:46.207 04:05:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:46.207 04:05:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.207 04:05:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:46.207 04:05:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:46.207 04:05:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.207 04:05:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:46.207 04:05:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:46.207 04:05:00 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:46.207 04:05:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:46.207 04:05:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:46.207 04:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:46.207 04:05:00 -- nvmf/common.sh@470 -- # nvmfpid=3778261 00:13:46.207 04:05:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:46.207 04:05:00 -- nvmf/common.sh@471 -- # waitforlisten 3778261 00:13:46.207 04:05:00 -- common/autotest_common.sh@817 -- # '[' -z 3778261 ']' 00:13:46.207 04:05:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.207 04:05:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:46.207 04:05:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.207 04:05:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:46.207 04:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:46.207 [2024-04-19 04:05:00.666472] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:13:46.207 [2024-04-19 04:05:00.666529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.207 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.467 [2024-04-19 04:05:00.750019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.467 [2024-04-19 04:05:00.837995] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.467 [2024-04-19 04:05:00.838037] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.467 [2024-04-19 04:05:00.838046] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.467 [2024-04-19 04:05:00.838055] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.467 [2024-04-19 04:05:00.838062] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
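As in the lvol run, the target is launched inside the namespace and the harness blocks in waitforlisten until the RPC socket answers. The polling itself is not visible in the trace; a hypothetical approximation of it (rpc_get_methods is just a cheap query to probe the socket):

    # Start nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1                  # give up if the target died
        sleep 0.5
    done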
00:13:46.467 [2024-04-19 04:05:00.838084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.406 04:05:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:47.406 04:05:01 -- common/autotest_common.sh@850 -- # return 0 00:13:47.406 04:05:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:47.406 04:05:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:47.406 04:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:47.406 04:05:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.406 04:05:01 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.406 [2024-04-19 04:05:01.855429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.406 04:05:01 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:47.406 04:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:47.406 04:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.406 04:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:47.665 ************************************ 00:13:47.665 START TEST lvs_grow_clean 00:13:47.665 ************************************ 00:13:47.665 04:05:02 -- common/autotest_common.sh@1111 -- # lvs_grow 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.665 04:05:02 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.924 04:05:02 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:47.924 04:05:02 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:48.182 04:05:02 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ad6af993-0fd0-4a24-babd-cb057cef4575 00:13:48.182 04:05:02 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:13:48.182 04:05:02 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:48.441 04:05:02 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:48.441 04:05:02 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:48.441 04:05:02 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ad6af993-0fd0-4a24-babd-cb057cef4575 lvol 150 00:13:48.700 04:05:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bffeb01f-fac0-4637-a198-e77d9dbf2a90 00:13:48.700 04:05:03 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:48.700 04:05:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:48.959 [2024-04-19 04:05:03.237941] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:48.959 [2024-04-19 04:05:03.238001] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:48.959 true 00:13:48.959 04:05:03 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:13:48.959 04:05:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:49.218 04:05:03 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:49.218 04:05:03 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:49.218 04:05:03 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bffeb01f-fac0-4637-a198-e77d9dbf2a90 00:13:49.477 04:05:03 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:49.736 [2024-04-19 04:05:04.196886] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.736 04:05:04 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.994 04:05:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3779092 00:13:49.994 04:05:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:49.994 04:05:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.994 04:05:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3779092 /var/tmp/bdevperf.sock 00:13:49.994 04:05:04 -- common/autotest_common.sh@817 -- # '[' -z 3779092 ']' 00:13:49.994 04:05:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.994 04:05:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:49.994 04:05:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.994 04:05:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:49.994 04:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:49.994 [2024-04-19 04:05:04.501626] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
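The numbers flowing through this part of the run are worth pinning down once, since every assertion in lvs_grow reduces to cluster arithmetic. bdev_aio_create was given a 4096-byte block size and bdev_lvol_create_lvstore a 4 MiB cluster size (--cluster-sz 4194304), so, assuming one cluster's worth of space goes to lvstore metadata (which is what the observed counts imply):

    200 MiB / 4096 B = 51200 blocks   (bdev_aio_rescan's old block count)
    400 MiB / 4096 B = 102400 blocks  (its new block count after the truncate)
    200 MiB / 4 MiB  = 50 clusters, minus 1 for metadata = 49 total_data_clusters
    400 MiB / 4 MiB  = 100 clusters, minus 1             = 99 after the grow
    150 MiB lvol     = ceil(150 / 4) = 38 clusters (38912 four-KiB blocks),
                       so the grown store should show 99 - 38 = 61 free_clusters

Those are exactly the 49, 99 and 61 values the script asserts below, and the 51200/102400 pair reported by the rescan notice above.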
00:13:49.994 [2024-04-19 04:05:04.501685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779092 ] 00:13:50.253 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.253 [2024-04-19 04:05:04.574892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.253 [2024-04-19 04:05:04.664010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.253 04:05:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:50.253 04:05:04 -- common/autotest_common.sh@850 -- # return 0 00:13:50.253 04:05:04 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:50.820 Nvme0n1 00:13:50.820 04:05:05 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:51.080 [ 00:13:51.080 { 00:13:51.080 "name": "Nvme0n1", 00:13:51.080 "aliases": [ 00:13:51.080 "bffeb01f-fac0-4637-a198-e77d9dbf2a90" 00:13:51.080 ], 00:13:51.080 "product_name": "NVMe disk", 00:13:51.080 "block_size": 4096, 00:13:51.080 "num_blocks": 38912, 00:13:51.080 "uuid": "bffeb01f-fac0-4637-a198-e77d9dbf2a90", 00:13:51.080 "assigned_rate_limits": { 00:13:51.080 "rw_ios_per_sec": 0, 00:13:51.080 "rw_mbytes_per_sec": 0, 00:13:51.080 "r_mbytes_per_sec": 0, 00:13:51.080 "w_mbytes_per_sec": 0 00:13:51.080 }, 00:13:51.080 "claimed": false, 00:13:51.080 "zoned": false, 00:13:51.080 "supported_io_types": { 00:13:51.080 "read": true, 00:13:51.080 "write": true, 00:13:51.080 "unmap": true, 00:13:51.080 "write_zeroes": true, 00:13:51.080 "flush": true, 00:13:51.080 "reset": true, 00:13:51.080 "compare": true, 00:13:51.080 "compare_and_write": true, 00:13:51.080 "abort": true, 00:13:51.080 "nvme_admin": true, 00:13:51.080 "nvme_io": true 00:13:51.080 }, 00:13:51.080 "memory_domains": [ 00:13:51.080 { 00:13:51.080 "dma_device_id": "system", 00:13:51.080 "dma_device_type": 1 00:13:51.080 } 00:13:51.080 ], 00:13:51.080 "driver_specific": { 00:13:51.080 "nvme": [ 00:13:51.080 { 00:13:51.080 "trid": { 00:13:51.080 "trtype": "TCP", 00:13:51.080 "adrfam": "IPv4", 00:13:51.080 "traddr": "10.0.0.2", 00:13:51.080 "trsvcid": "4420", 00:13:51.080 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:51.080 }, 00:13:51.080 "ctrlr_data": { 00:13:51.080 "cntlid": 1, 00:13:51.080 "vendor_id": "0x8086", 00:13:51.080 "model_number": "SPDK bdev Controller", 00:13:51.080 "serial_number": "SPDK0", 00:13:51.080 "firmware_revision": "24.05", 00:13:51.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.080 "oacs": { 00:13:51.080 "security": 0, 00:13:51.080 "format": 0, 00:13:51.080 "firmware": 0, 00:13:51.080 "ns_manage": 0 00:13:51.080 }, 00:13:51.080 "multi_ctrlr": true, 00:13:51.080 "ana_reporting": false 00:13:51.080 }, 00:13:51.080 "vs": { 00:13:51.080 "nvme_version": "1.3" 00:13:51.080 }, 00:13:51.080 "ns_data": { 00:13:51.080 "id": 1, 00:13:51.080 "can_share": true 00:13:51.080 } 00:13:51.080 } 00:13:51.080 ], 00:13:51.080 "mp_policy": "active_passive" 00:13:51.080 } 00:13:51.080 } 00:13:51.080 ] 00:13:51.080 04:05:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3779238 00:13:51.080 04:05:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:51.080 04:05:05 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:13:51.080 Running I/O for 10 seconds...
00:13:52.040 Latency(us)
00:13:52.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:52.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:52.040 Nvme0n1 : 1.00 14932.00 58.33 0.00 0.00 0.00 0.00 0.00
00:13:52.040 ===================================================================================================================
00:13:52.040 Total : 14932.00 58.33 0.00 0.00 0.00 0.00 0.00
00:13:52.040
00:13:52.991 04:05:07 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ad6af993-0fd0-4a24-babd-cb057cef4575
00:13:53.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:53.249 Nvme0n1 : 2.00 15058.00 58.82 0.00 0.00 0.00 0.00 0.00
00:13:53.249 ===================================================================================================================
00:13:53.249 Total : 15058.00 58.82 0.00 0.00 0.00 0.00 0.00
00:13:53.249
00:13:53.249 true
00:13:53.249 04:05:07 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575
00:13:53.249 04:05:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:13:53.508 04:05:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:13:53.508 04:05:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:13:53.508 04:05:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 3779238
00:13:54.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:54.074 Nvme0n1 : 3.00 15078.33 58.90 0.00 0.00 0.00 0.00 0.00
00:13:54.074 ===================================================================================================================
00:13:54.074 Total : 15078.33 58.90 0.00 0.00 0.00 0.00 0.00
00:13:54.074
00:13:55.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:55.451 Nvme0n1 : 4.00 15121.75 59.07 0.00 0.00 0.00 0.00 0.00
00:13:55.451 ===================================================================================================================
00:13:55.451 Total : 15121.75 59.07 0.00 0.00 0.00 0.00 0.00
00:13:55.451
00:13:56.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:56.387 Nvme0n1 : 5.00 15148.40 59.17 0.00 0.00 0.00 0.00 0.00
00:13:56.387 ===================================================================================================================
00:13:56.387 Total : 15148.40 59.17 0.00 0.00 0.00 0.00 0.00
00:13:56.387
00:13:57.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:57.323 Nvme0n1 : 6.00 15156.33 59.20 0.00 0.00 0.00 0.00 0.00
00:13:57.323 ===================================================================================================================
00:13:57.323 Total : 15156.33 59.20 0.00 0.00 0.00 0.00 0.00
00:13:57.323
00:13:58.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:58.259 Nvme0n1 : 7.00 15178.29 59.29 0.00 0.00 0.00 0.00 0.00
00:13:58.259 ===================================================================================================================
00:13:58.259 Total : 15178.29 59.29 0.00 0.00 0.00 0.00 0.00
00:13:58.259
00:13:59.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:13:59.195 Nvme0n1 : 8.00 15191.25 59.34 0.00 0.00 0.00 0.00 0.00
00:13:59.195 ===================================================================================================================
00:13:59.195 Total : 15191.25 59.34 0.00 0.00 0.00 0.00 0.00
00:13:59.195
00:14:00.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:00.128 Nvme0n1 : 9.00 15204.33 59.39 0.00 0.00 0.00 0.00 0.00
00:14:00.128 ===================================================================================================================
00:14:00.128 Total : 15204.33 59.39 0.00 0.00 0.00 0.00 0.00
00:14:00.128
00:14:01.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:01.062 Nvme0n1 : 10.00 15215.80 59.44 0.00 0.00 0.00 0.00 0.00
00:14:01.062 ===================================================================================================================
00:14:01.062 Total : 15215.80 59.44 0.00 0.00 0.00 0.00 0.00
00:14:01.062
00:14:01.062
00:14:01.062 Latency(us)
00:14:01.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:01.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:01.062 Nvme0n1 : 10.01 15215.36 59.43 0.00 0.00 8407.41 4855.62 14894.55
00:14:01.062 ===================================================================================================================
00:14:01.062 Total : 15215.36 59.43 0.00 0.00 8407.41 4855.62 14894.55
00:14:01.062 0
00:14:01.062 04:05:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3779092
00:14:01.062 04:05:15 -- common/autotest_common.sh@936 -- # '[' -z 3779092 ']'
00:14:01.062 04:05:15 -- common/autotest_common.sh@940 -- # kill -0 3779092
00:14:01.320 04:05:15 -- common/autotest_common.sh@941 -- # uname
00:14:01.320 04:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:01.320 04:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3779092
00:14:01.320 04:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:01.320 04:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:01.320 04:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3779092'
00:14:01.320 killing process with pid 3779092
00:14:01.320 04:05:15 -- common/autotest_common.sh@955 -- # kill 3779092
00:14:01.320 Received shutdown signal, test time was about 10.000000 seconds
00:14:01.320
00:14:01.320 Latency(us)
00:14:01.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:01.320 ===================================================================================================================
00:14:01.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:01.320 04:05:15 -- common/autotest_common.sh@960 -- # wait 3779092
00:14:01.578 04:05:15 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:01.578 04:05:16 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575
00:14:01.578 04:05:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:14:01.836 04:05:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:14:01.836 04:05:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]]
00:14:01.836 04:05:16 -- target/nvmf_lvs_grow.sh@83 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:02.095 [2024-04-19 04:05:16.431130] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:02.095 04:05:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:02.095 04:05:16 -- common/autotest_common.sh@638 -- # local es=0 00:14:02.095 04:05:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:02.095 04:05:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.095 04:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:02.095 04:05:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.095 04:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:02.095 04:05:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.095 04:05:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:02.095 04:05:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:02.095 04:05:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:02.095 04:05:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:02.354 request: 00:14:02.354 { 00:14:02.354 "uuid": "ad6af993-0fd0-4a24-babd-cb057cef4575", 00:14:02.354 "method": "bdev_lvol_get_lvstores", 00:14:02.354 "req_id": 1 00:14:02.354 } 00:14:02.354 Got JSON-RPC error response 00:14:02.354 response: 00:14:02.354 { 00:14:02.354 "code": -19, 00:14:02.354 "message": "No such device" 00:14:02.354 } 00:14:02.354 04:05:16 -- common/autotest_common.sh@641 -- # es=1 00:14:02.354 04:05:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:02.354 04:05:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:02.354 04:05:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:02.354 04:05:16 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.612 aio_bdev 00:14:02.612 04:05:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bffeb01f-fac0-4637-a198-e77d9dbf2a90 00:14:02.612 04:05:16 -- common/autotest_common.sh@885 -- # local bdev_name=bffeb01f-fac0-4637-a198-e77d9dbf2a90 00:14:02.612 04:05:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:02.612 04:05:16 -- common/autotest_common.sh@887 -- # local i 00:14:02.612 04:05:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:02.612 04:05:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:02.612 04:05:16 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:02.871 04:05:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bffeb01f-fac0-4637-a198-e77d9dbf2a90 -t 2000 
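This is the negative half of the test: deleting aio_bdev while the store is live takes the hot-remove path (the vbdev_lvs_hotremove_cb notice above, which closes the lvstore), so the subsequent bdev_lvol_get_lvstores is expected to fail with -19 ("No such device", i.e. -ENODEV), and the NOT wrapper inverts the exit status so the step passes only because the lookup failed. Stripped of the harness's valid_exec_arg machinery, the check amounts to something like this sketch, assuming rpc.py turns the JSON-RPC error into a non-zero exit code (which the es=1 trace above shows it does):

    if rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575; then
        echo 'lvstore still present after deleting its base bdev' >&2
        exit 1
    fi

Re-creating aio_bdev on the same 400 MiB file then lets the lvstore load back in with its blobs intact, which the bdev_get_bdevs dump below confirms.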
00:14:03.129 [ 00:14:03.129 { 00:14:03.129 "name": "bffeb01f-fac0-4637-a198-e77d9dbf2a90", 00:14:03.129 "aliases": [ 00:14:03.129 "lvs/lvol" 00:14:03.129 ], 00:14:03.129 "product_name": "Logical Volume", 00:14:03.129 "block_size": 4096, 00:14:03.130 "num_blocks": 38912, 00:14:03.130 "uuid": "bffeb01f-fac0-4637-a198-e77d9dbf2a90", 00:14:03.130 "assigned_rate_limits": { 00:14:03.130 "rw_ios_per_sec": 0, 00:14:03.130 "rw_mbytes_per_sec": 0, 00:14:03.130 "r_mbytes_per_sec": 0, 00:14:03.130 "w_mbytes_per_sec": 0 00:14:03.130 }, 00:14:03.130 "claimed": false, 00:14:03.130 "zoned": false, 00:14:03.130 "supported_io_types": { 00:14:03.130 "read": true, 00:14:03.130 "write": true, 00:14:03.130 "unmap": true, 00:14:03.130 "write_zeroes": true, 00:14:03.130 "flush": false, 00:14:03.130 "reset": true, 00:14:03.130 "compare": false, 00:14:03.130 "compare_and_write": false, 00:14:03.130 "abort": false, 00:14:03.130 "nvme_admin": false, 00:14:03.130 "nvme_io": false 00:14:03.130 }, 00:14:03.130 "driver_specific": { 00:14:03.130 "lvol": { 00:14:03.130 "lvol_store_uuid": "ad6af993-0fd0-4a24-babd-cb057cef4575", 00:14:03.130 "base_bdev": "aio_bdev", 00:14:03.130 "thin_provision": false, 00:14:03.130 "snapshot": false, 00:14:03.130 "clone": false, 00:14:03.130 "esnap_clone": false 00:14:03.130 } 00:14:03.130 } 00:14:03.130 } 00:14:03.130 ] 00:14:03.130 04:05:17 -- common/autotest_common.sh@893 -- # return 0 00:14:03.130 04:05:17 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:03.130 04:05:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:03.388 04:05:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:03.388 04:05:17 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:03.388 04:05:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:03.646 04:05:17 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:03.646 04:05:17 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bffeb01f-fac0-4637-a198-e77d9dbf2a90 00:14:03.646 04:05:18 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad6af993-0fd0-4a24-babd-cb057cef4575 00:14:04.212 04:05:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:04.212 04:05:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:04.212 00:14:04.212 real 0m16.705s 00:14:04.212 user 0m16.506s 00:14:04.212 sys 0m1.502s 00:14:04.212 04:05:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:04.212 04:05:18 -- common/autotest_common.sh@10 -- # set +x 00:14:04.212 ************************************ 00:14:04.212 END TEST lvs_grow_clean 00:14:04.212 ************************************ 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:04.471 04:05:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.471 04:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.471 04:05:18 -- common/autotest_common.sh@10 -- # set +x 00:14:04.471 ************************************ 00:14:04.471 START TEST lvs_grow_dirty 
00:14:04.471 ************************************ 00:14:04.471 04:05:18 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:04.471 04:05:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:04.729 04:05:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:04.729 04:05:19 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:04.987 04:05:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:04.987 04:05:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:04.987 04:05:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:05.245 04:05:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:05.245 04:05:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:05.245 04:05:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e lvol 150 00:14:05.504 04:05:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:05.504 04:05:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:05.504 04:05:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:05.762 [2024-04-19 04:05:20.102075] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:05.762 [2024-04-19 04:05:20.102139] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:05.762 true 00:14:05.762 04:05:20 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:05.762 04:05:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:06.020 04:05:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:06.020 04:05:20 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:06.278 04:05:20 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:06.536 04:05:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:06.794 04:05:21 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.794 04:05:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3782078 00:14:06.794 04:05:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.794 04:05:21 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:06.794 04:05:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3782078 /var/tmp/bdevperf.sock 00:14:06.794 04:05:21 -- common/autotest_common.sh@817 -- # '[' -z 3782078 ']' 00:14:06.794 04:05:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.794 04:05:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.794 04:05:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.794 04:05:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.794 04:05:21 -- common/autotest_common.sh@10 -- # set +x 00:14:07.051 [2024-04-19 04:05:21.359911] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
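The bdevperf invocation both halves of the test share is worth decoding once (flag meanings paraphrased from bdevperf's usage text): -r puts its own JSON-RPC socket at /var/tmp/bdevperf.sock, -m 0x2 pins it to core 1 so it does not contend with the target on core 0, -o 4096 -q 128 -w randwrite ask for 4 KiB random writes at queue depth 128, -t 10 runs for ten seconds, -S 1 prints the per-second rows seen in the tables, and -z makes it start idle and wait for a perform_tests RPC. That last flag is what lets the script attach the controller first and kick off I/O afterwards, roughly (paths shortened):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # create the Nvme0n1 bdev over NVMe/TCP inside the waiting bdevperf
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # start the configured workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests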
00:14:07.052 [2024-04-19 04:05:21.359971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782078 ] 00:14:07.052 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.052 [2024-04-19 04:05:21.433226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.052 [2024-04-19 04:05:21.518203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.310 04:05:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.310 04:05:21 -- common/autotest_common.sh@850 -- # return 0 00:14:07.310 04:05:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:07.569 Nvme0n1 00:14:07.569 04:05:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:07.827 [ 00:14:07.827 { 00:14:07.827 "name": "Nvme0n1", 00:14:07.827 "aliases": [ 00:14:07.827 "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61" 00:14:07.827 ], 00:14:07.827 "product_name": "NVMe disk", 00:14:07.827 "block_size": 4096, 00:14:07.827 "num_blocks": 38912, 00:14:07.827 "uuid": "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61", 00:14:07.827 "assigned_rate_limits": { 00:14:07.827 "rw_ios_per_sec": 0, 00:14:07.827 "rw_mbytes_per_sec": 0, 00:14:07.827 "r_mbytes_per_sec": 0, 00:14:07.827 "w_mbytes_per_sec": 0 00:14:07.827 }, 00:14:07.827 "claimed": false, 00:14:07.827 "zoned": false, 00:14:07.827 "supported_io_types": { 00:14:07.827 "read": true, 00:14:07.827 "write": true, 00:14:07.827 "unmap": true, 00:14:07.827 "write_zeroes": true, 00:14:07.827 "flush": true, 00:14:07.827 "reset": true, 00:14:07.827 "compare": true, 00:14:07.827 "compare_and_write": true, 00:14:07.827 "abort": true, 00:14:07.827 "nvme_admin": true, 00:14:07.827 "nvme_io": true 00:14:07.827 }, 00:14:07.827 "memory_domains": [ 00:14:07.827 { 00:14:07.827 "dma_device_id": "system", 00:14:07.827 "dma_device_type": 1 00:14:07.827 } 00:14:07.827 ], 00:14:07.827 "driver_specific": { 00:14:07.827 "nvme": [ 00:14:07.827 { 00:14:07.827 "trid": { 00:14:07.827 "trtype": "TCP", 00:14:07.827 "adrfam": "IPv4", 00:14:07.827 "traddr": "10.0.0.2", 00:14:07.827 "trsvcid": "4420", 00:14:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:07.827 }, 00:14:07.827 "ctrlr_data": { 00:14:07.827 "cntlid": 1, 00:14:07.827 "vendor_id": "0x8086", 00:14:07.827 "model_number": "SPDK bdev Controller", 00:14:07.827 "serial_number": "SPDK0", 00:14:07.827 "firmware_revision": "24.05", 00:14:07.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:07.827 "oacs": { 00:14:07.827 "security": 0, 00:14:07.827 "format": 0, 00:14:07.827 "firmware": 0, 00:14:07.827 "ns_manage": 0 00:14:07.827 }, 00:14:07.827 "multi_ctrlr": true, 00:14:07.827 "ana_reporting": false 00:14:07.827 }, 00:14:07.827 "vs": { 00:14:07.827 "nvme_version": "1.3" 00:14:07.827 }, 00:14:07.827 "ns_data": { 00:14:07.827 "id": 1, 00:14:07.827 "can_share": true 00:14:07.827 } 00:14:07.827 } 00:14:07.827 ], 00:14:07.827 "mp_policy": "active_passive" 00:14:07.827 } 00:14:07.827 } 00:14:07.827 ] 00:14:07.827 04:05:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3782304 00:14:07.827 04:05:22 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:07.827 04:05:22 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:14:07.827 Running I/O for 10 seconds...
00:14:09.210 Latency(us)
00:14:09.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:09.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:09.210 Nvme0n1 : 1.00 14934.00 58.34 0.00 0.00 0.00 0.00 0.00
00:14:09.210 ===================================================================================================================
00:14:09.210 Total : 14934.00 58.34 0.00 0.00 0.00 0.00 0.00
00:14:09.210
00:14:09.791 04:05:24 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e
00:14:10.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:10.059 Nvme0n1 : 2.00 15032.50 58.72 0.00 0.00 0.00 0.00 0.00
00:14:10.059 ===================================================================================================================
00:14:10.059 Total : 15032.50 58.72 0.00 0.00 0.00 0.00 0.00
00:14:10.059
00:14:10.059 true
00:14:10.059 04:05:24 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e
00:14:10.059 04:05:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:14:10.316 04:05:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:14:10.316 04:05:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:14:10.316 04:05:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 3782304
00:14:10.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:10.881 Nvme0n1 : 3.00 15085.00 58.93 0.00 0.00 0.00 0.00 0.00
00:14:10.881 ===================================================================================================================
00:14:10.881 Total : 15085.00 58.93 0.00 0.00 0.00 0.00 0.00
00:14:10.881
00:14:12.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:12.252 Nvme0n1 : 4.00 15093.50 58.96 0.00 0.00 0.00 0.00 0.00
00:14:12.252 ===================================================================================================================
00:14:12.252 Total : 15093.50 58.96 0.00 0.00 0.00 0.00 0.00
00:14:12.252
00:14:13.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:13.185 Nvme0n1 : 5.00 15126.40 59.09 0.00 0.00 0.00 0.00 0.00
00:14:13.185 ===================================================================================================================
00:14:13.185 Total : 15126.40 59.09 0.00 0.00 0.00 0.00 0.00
00:14:13.185
00:14:14.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:14.120 Nvme0n1 : 6.00 15137.00 59.13 0.00 0.00 0.00 0.00 0.00
00:14:14.120 ===================================================================================================================
00:14:14.120 Total : 15137.00 59.13 0.00 0.00 0.00 0.00 0.00
00:14:14.120
00:14:15.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:15.053 Nvme0n1 : 7.00 15161.86 59.23 0.00 0.00 0.00 0.00 0.00
00:14:15.053 ===================================================================================================================
00:14:15.054 Total : 15161.86 59.23 0.00 0.00 0.00 0.00 0.00
00:14:15.054
00:14:15.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:15.987 Nvme0n1 : 8.00 15172.75 59.27 0.00 0.00 0.00 0.00 0.00
00:14:15.987 ===================================================================================================================
00:14:15.987 Total : 15172.75 59.27 0.00 0.00 0.00 0.00 0.00
00:14:15.987
00:14:16.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:16.919 Nvme0n1 : 9.00 15181.89 59.30 0.00 0.00 0.00 0.00 0.00
00:14:16.919 ===================================================================================================================
00:14:16.919 Total : 15181.89 59.30 0.00 0.00 0.00 0.00 0.00
00:14:16.919
00:14:17.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:17.915 Nvme0n1 : 10.00 15194.20 59.35 0.00 0.00 0.00 0.00 0.00
00:14:17.915 ===================================================================================================================
00:14:17.915 Total : 15194.20 59.35 0.00 0.00 0.00 0.00 0.00
00:14:17.915
00:14:17.915
00:14:17.915 Latency(us)
00:14:17.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:14:17.915 Nvme0n1 : 10.01 15194.23 59.35 0.00 0.00 8418.81 2144.81 16920.20
00:14:17.915 ===================================================================================================================
00:14:17.915 Total : 15194.23 59.35 0.00 0.00 8418.81 2144.81 16920.20
00:14:17.915 0
00:14:17.915 04:05:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3782078
00:14:17.915 04:05:32 -- common/autotest_common.sh@936 -- # '[' -z 3782078 ']'
00:14:17.915 04:05:32 -- common/autotest_common.sh@940 -- # kill -0 3782078
00:14:17.915 04:05:32 -- common/autotest_common.sh@941 -- # uname
00:14:17.915 04:05:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:17.915 04:05:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3782078
00:14:17.915 04:05:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:17.915 04:05:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:17.915 04:05:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3782078'
00:14:17.915 killing process with pid 3782078
00:14:17.915 04:05:32 -- common/autotest_common.sh@955 -- # kill 3782078
00:14:17.915 Received shutdown signal, test time was about 10.000000 seconds
00:14:17.915
00:14:17.915 Latency(us)
00:14:17.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:17.915 ===================================================================================================================
00:14:17.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:17.915 04:05:32 -- common/autotest_common.sh@960 -- # wait 3782078
00:14:18.173 04:05:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:14:18.432 04:05:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e
00:14:18.432 04:05:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:14:18.690 04:05:33 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:14:18.690 04:05:33 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]]
00:14:18.690 04:05:33 --
target/nvmf_lvs_grow.sh@73 -- # kill -9 3778261 00:14:18.690 04:05:33 -- target/nvmf_lvs_grow.sh@74 -- # wait 3778261 00:14:18.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3778261 Killed "${NVMF_APP[@]}" "$@" 00:14:18.690 04:05:33 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:18.690 04:05:33 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:18.690 04:05:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:18.690 04:05:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:18.690 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:18.690 04:05:33 -- nvmf/common.sh@470 -- # nvmfpid=3784282 00:14:18.690 04:05:33 -- nvmf/common.sh@471 -- # waitforlisten 3784282 00:14:18.690 04:05:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:18.690 04:05:33 -- common/autotest_common.sh@817 -- # '[' -z 3784282 ']' 00:14:18.949 04:05:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.949 04:05:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.949 04:05:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.949 04:05:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.949 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:18.949 [2024-04-19 04:05:33.265720] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:18.949 [2024-04-19 04:05:33.265778] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.949 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.949 [2024-04-19 04:05:33.351414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.949 [2024-04-19 04:05:33.439562] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.949 [2024-04-19 04:05:33.439607] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.949 [2024-04-19 04:05:33.439617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.949 [2024-04-19 04:05:33.439626] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.949 [2024-04-19 04:05:33.439634] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
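This is the step that makes the dirty variant dirty: the [[ dirty == \d\i\r\t\y ]] branch SIGKILLs the original target (pid 3778261) while the lvstore is still loaded, so nothing gets a chance to write out a clean shutdown state. Condensed, with the pids and helper names taken from the trace above:

    kill -9 3778261       # first nvmf_tgt dies without unloading the lvstore
    wait 3778261          # reap it; bash prints the 'Killed' line seen above
    nvmfappstart -m 0x1   # start a fresh target (pid 3784282 here)

When the new target re-creates aio_bdev below, the blobstore notices it was not cleanly unloaded and replays its metadata, which is what the 'Performing recovery on blobstore' and 'Recover: blob 0x0 / 0x1' notices signal; the rest of the test then asserts that the recovered store still reports 99 total and 61 free clusters.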
00:14:18.949 [2024-04-19 04:05:33.439660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.207 04:05:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.207 04:05:33 -- common/autotest_common.sh@850 -- # return 0 00:14:19.207 04:05:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:19.207 04:05:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:19.207 04:05:33 -- common/autotest_common.sh@10 -- # set +x 00:14:19.207 04:05:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.207 04:05:33 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:19.465 [2024-04-19 04:05:33.803020] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:19.465 [2024-04-19 04:05:33.803128] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:19.465 [2024-04-19 04:05:33.803165] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:19.465 04:05:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:19.465 04:05:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:19.465 04:05:33 -- common/autotest_common.sh@885 -- # local bdev_name=e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:19.465 04:05:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:19.465 04:05:33 -- common/autotest_common.sh@887 -- # local i 00:14:19.465 04:05:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:19.465 04:05:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:19.465 04:05:33 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:19.723 04:05:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 -t 2000 00:14:19.981 [ 00:14:19.981 { 00:14:19.981 "name": "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61", 00:14:19.981 "aliases": [ 00:14:19.981 "lvs/lvol" 00:14:19.981 ], 00:14:19.981 "product_name": "Logical Volume", 00:14:19.981 "block_size": 4096, 00:14:19.981 "num_blocks": 38912, 00:14:19.981 "uuid": "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61", 00:14:19.981 "assigned_rate_limits": { 00:14:19.981 "rw_ios_per_sec": 0, 00:14:19.981 "rw_mbytes_per_sec": 0, 00:14:19.981 "r_mbytes_per_sec": 0, 00:14:19.981 "w_mbytes_per_sec": 0 00:14:19.981 }, 00:14:19.981 "claimed": false, 00:14:19.981 "zoned": false, 00:14:19.981 "supported_io_types": { 00:14:19.981 "read": true, 00:14:19.981 "write": true, 00:14:19.981 "unmap": true, 00:14:19.981 "write_zeroes": true, 00:14:19.981 "flush": false, 00:14:19.981 "reset": true, 00:14:19.981 "compare": false, 00:14:19.981 "compare_and_write": false, 00:14:19.981 "abort": false, 00:14:19.981 "nvme_admin": false, 00:14:19.981 "nvme_io": false 00:14:19.981 }, 00:14:19.981 "driver_specific": { 00:14:19.981 "lvol": { 00:14:19.981 "lvol_store_uuid": "8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e", 00:14:19.981 "base_bdev": "aio_bdev", 00:14:19.981 "thin_provision": false, 00:14:19.981 "snapshot": false, 00:14:19.981 "clone": false, 00:14:19.981 "esnap_clone": false 00:14:19.981 } 00:14:19.981 } 00:14:19.981 } 00:14:19.981 ] 00:14:19.981 04:05:34 -- common/autotest_common.sh@893 -- # return 0 00:14:19.981 04:05:34 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:19.981 04:05:34 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:20.239 04:05:34 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:20.239 04:05:34 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:20.239 04:05:34 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:20.498 04:05:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:20.498 04:05:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:20.498 [2024-04-19 04:05:35.003787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:20.755 04:05:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:20.755 04:05:35 -- common/autotest_common.sh@638 -- # local es=0 00:14:20.755 04:05:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:20.755 04:05:35 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.755 04:05:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.755 04:05:35 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.755 04:05:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.755 04:05:35 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.755 04:05:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:20.755 04:05:35 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.755 04:05:35 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:20.755 04:05:35 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:20.755 request: 00:14:20.755 { 00:14:20.755 "uuid": "8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e", 00:14:20.755 "method": "bdev_lvol_get_lvstores", 00:14:20.755 "req_id": 1 00:14:20.756 } 00:14:20.756 Got JSON-RPC error response 00:14:20.756 response: 00:14:20.756 { 00:14:20.756 "code": -19, 00:14:20.756 "message": "No such device" 00:14:20.756 } 00:14:21.013 04:05:35 -- common/autotest_common.sh@641 -- # es=1 00:14:21.013 04:05:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:21.013 04:05:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:21.013 04:05:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:21.013 04:05:35 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:21.013 aio_bdev 00:14:21.271 04:05:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:21.271 04:05:35 -- 
common/autotest_common.sh@885 -- # local bdev_name=e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:21.271 04:05:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:21.271 04:05:35 -- common/autotest_common.sh@887 -- # local i 00:14:21.271 04:05:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:21.271 04:05:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:21.271 04:05:35 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:21.271 04:05:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 -t 2000 00:14:21.528 [ 00:14:21.528 { 00:14:21.528 "name": "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61", 00:14:21.528 "aliases": [ 00:14:21.528 "lvs/lvol" 00:14:21.528 ], 00:14:21.528 "product_name": "Logical Volume", 00:14:21.528 "block_size": 4096, 00:14:21.528 "num_blocks": 38912, 00:14:21.528 "uuid": "e1f5c2b1-c663-4126-bbd0-d1e9f809dc61", 00:14:21.528 "assigned_rate_limits": { 00:14:21.528 "rw_ios_per_sec": 0, 00:14:21.528 "rw_mbytes_per_sec": 0, 00:14:21.528 "r_mbytes_per_sec": 0, 00:14:21.528 "w_mbytes_per_sec": 0 00:14:21.528 }, 00:14:21.528 "claimed": false, 00:14:21.528 "zoned": false, 00:14:21.528 "supported_io_types": { 00:14:21.528 "read": true, 00:14:21.528 "write": true, 00:14:21.528 "unmap": true, 00:14:21.528 "write_zeroes": true, 00:14:21.528 "flush": false, 00:14:21.528 "reset": true, 00:14:21.528 "compare": false, 00:14:21.528 "compare_and_write": false, 00:14:21.528 "abort": false, 00:14:21.528 "nvme_admin": false, 00:14:21.528 "nvme_io": false 00:14:21.528 }, 00:14:21.528 "driver_specific": { 00:14:21.528 "lvol": { 00:14:21.529 "lvol_store_uuid": "8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e", 00:14:21.529 "base_bdev": "aio_bdev", 00:14:21.529 "thin_provision": false, 00:14:21.529 "snapshot": false, 00:14:21.529 "clone": false, 00:14:21.529 "esnap_clone": false 00:14:21.529 } 00:14:21.529 } 00:14:21.529 } 00:14:21.529 ] 00:14:21.529 04:05:35 -- common/autotest_common.sh@893 -- # return 0 00:14:21.529 04:05:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:21.529 04:05:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:21.786 04:05:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:21.786 04:05:36 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:21.786 04:05:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:22.044 04:05:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:22.044 04:05:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e1f5c2b1-c663-4126-bbd0-d1e9f809dc61 00:14:22.302 04:05:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ab8f4e2-58b1-49e7-b35f-20a1ac84a98e 00:14:22.559 04:05:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:22.817 04:05:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.817 00:14:22.817 real 0m18.380s 00:14:22.817 user 
0m48.455s 00:14:22.817 sys 0m3.473s 00:14:22.817 04:05:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:22.817 04:05:37 -- common/autotest_common.sh@10 -- # set +x 00:14:22.817 ************************************ 00:14:22.817 END TEST lvs_grow_dirty 00:14:22.817 ************************************ 00:14:22.817 04:05:37 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:22.817 04:05:37 -- common/autotest_common.sh@794 -- # type=--id 00:14:22.817 04:05:37 -- common/autotest_common.sh@795 -- # id=0 00:14:22.817 04:05:37 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:22.817 04:05:37 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:22.817 04:05:37 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:22.817 04:05:37 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:22.817 04:05:37 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:22.817 04:05:37 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:22.817 nvmf_trace.0 00:14:22.817 04:05:37 -- common/autotest_common.sh@809 -- # return 0 00:14:22.817 04:05:37 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:22.817 04:05:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:22.817 04:05:37 -- nvmf/common.sh@117 -- # sync 00:14:22.817 04:05:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.817 04:05:37 -- nvmf/common.sh@120 -- # set +e 00:14:22.817 04:05:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.075 04:05:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.075 rmmod nvme_tcp 00:14:23.075 rmmod nvme_fabrics 00:14:23.075 rmmod nvme_keyring 00:14:23.075 04:05:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.075 04:05:37 -- nvmf/common.sh@124 -- # set -e 00:14:23.075 04:05:37 -- nvmf/common.sh@125 -- # return 0 00:14:23.075 04:05:37 -- nvmf/common.sh@478 -- # '[' -n 3784282 ']' 00:14:23.075 04:05:37 -- nvmf/common.sh@479 -- # killprocess 3784282 00:14:23.075 04:05:37 -- common/autotest_common.sh@936 -- # '[' -z 3784282 ']' 00:14:23.075 04:05:37 -- common/autotest_common.sh@940 -- # kill -0 3784282 00:14:23.075 04:05:37 -- common/autotest_common.sh@941 -- # uname 00:14:23.075 04:05:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:23.075 04:05:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3784282 00:14:23.075 04:05:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:23.075 04:05:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:23.075 04:05:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3784282' 00:14:23.075 killing process with pid 3784282 00:14:23.075 04:05:37 -- common/autotest_common.sh@955 -- # kill 3784282 00:14:23.075 04:05:37 -- common/autotest_common.sh@960 -- # wait 3784282 00:14:23.333 04:05:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:23.333 04:05:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:23.333 04:05:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:23.333 04:05:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.333 04:05:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.333 04:05:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.333 04:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.333 04:05:37 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:25.862 04:05:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.862 00:14:25.862 real 0m44.969s 00:14:25.862 user 1m11.593s 00:14:25.862 sys 0m9.862s 00:14:25.862 04:05:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.863 04:05:39 -- common/autotest_common.sh@10 -- # set +x 00:14:25.863 ************************************ 00:14:25.863 END TEST nvmf_lvs_grow 00:14:25.863 ************************************ 00:14:25.863 04:05:39 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:25.863 04:05:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.863 04:05:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.863 04:05:39 -- common/autotest_common.sh@10 -- # set +x 00:14:25.863 ************************************ 00:14:25.863 START TEST nvmf_bdev_io_wait 00:14:25.863 ************************************ 00:14:25.863 04:05:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:25.863 * Looking for test storage... 00:14:25.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.863 04:05:40 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.863 04:05:40 -- nvmf/common.sh@7 -- # uname -s 00:14:25.863 04:05:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.863 04:05:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.863 04:05:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.863 04:05:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.863 04:05:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.863 04:05:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.863 04:05:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.863 04:05:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.863 04:05:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.863 04:05:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.863 04:05:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:25.863 04:05:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:25.863 04:05:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.863 04:05:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.863 04:05:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.863 04:05:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.863 04:05:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.863 04:05:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.863 04:05:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.863 04:05:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.863 04:05:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.863 04:05:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.863 04:05:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.863 04:05:40 -- paths/export.sh@5 -- # export PATH 00:14:25.863 04:05:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.863 04:05:40 -- nvmf/common.sh@47 -- # : 0 00:14:25.863 04:05:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.863 04:05:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.863 04:05:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.863 04:05:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.863 04:05:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.863 04:05:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.863 04:05:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.863 04:05:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.863 04:05:40 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.863 04:05:40 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.863 04:05:40 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:25.863 04:05:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:25.863 04:05:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.863 04:05:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:25.863 04:05:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:25.863 04:05:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:25.863 04:05:40 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.863 04:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.863 04:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.863 04:05:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:25.863 04:05:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:25.863 04:05:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.863 04:05:40 -- common/autotest_common.sh@10 -- # set +x 00:14:31.175 04:05:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:31.175 04:05:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.175 04:05:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.175 04:05:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.175 04:05:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.175 04:05:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.175 04:05:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.175 04:05:45 -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.175 04:05:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.175 04:05:45 -- nvmf/common.sh@296 -- # e810=() 00:14:31.175 04:05:45 -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.175 04:05:45 -- nvmf/common.sh@297 -- # x722=() 00:14:31.175 04:05:45 -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.175 04:05:45 -- nvmf/common.sh@298 -- # mlx=() 00:14:31.175 04:05:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.175 04:05:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.175 04:05:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.175 04:05:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.175 04:05:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.175 04:05:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.175 04:05:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:31.175 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:31.175 04:05:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.175 04:05:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
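The scan above walks each supported PCI function and asks sysfs which kernel net device the driver bound to it. A minimal standalone sketch of that lookup, using the 0000:af:00.0 address from this run (the nullglob guard and the error message are illustrative additions; the surrounding device-ID matching is omitted):

    #!/usr/bin/env bash
    # Resolve a PCI function to its kernel net device(s), the same way
    # common.sh does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    pci=0000:af:00.0
    shopt -s nullglob
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip sysfs paths, keep interface names
    if ((${#pci_net_devs[@]})); then
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    else
        echo "no net devices under $pci (is a NIC driver bound?)" >&2
    fi
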
00:14:31.176 04:05:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:31.176 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:31.176 04:05:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.176 04:05:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.176 04:05:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.176 04:05:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:31.176 04:05:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.176 04:05:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:31.176 Found net devices under 0000:af:00.0: cvl_0_0 00:14:31.176 04:05:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.176 04:05:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.176 04:05:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.176 04:05:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:31.176 04:05:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.176 04:05:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:31.176 Found net devices under 0000:af:00.1: cvl_0_1 00:14:31.176 04:05:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.176 04:05:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:31.176 04:05:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:31.176 04:05:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:31.176 04:05:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.176 04:05:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.176 04:05:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.176 04:05:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.176 04:05:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.176 04:05:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.176 04:05:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.176 04:05:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.176 04:05:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.176 04:05:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.176 04:05:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.176 04:05:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.176 04:05:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.176 04:05:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.176 04:05:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.176 04:05:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.176 04:05:45 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.176 04:05:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.176 04:05:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.176 04:05:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:31.176 00:14:31.176 --- 10.0.0.2 ping statistics --- 00:14:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.176 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:31.176 04:05:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:14:31.176 00:14:31.176 --- 10.0.0.1 ping statistics --- 00:14:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.176 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:14:31.176 04:05:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.176 04:05:45 -- nvmf/common.sh@411 -- # return 0 00:14:31.176 04:05:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:31.176 04:05:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.176 04:05:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:31.176 04:05:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.176 04:05:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:31.176 04:05:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:31.176 04:05:45 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:31.176 04:05:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:31.176 04:05:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:31.176 04:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.176 04:05:45 -- nvmf/common.sh@470 -- # nvmfpid=3788722 00:14:31.176 04:05:45 -- nvmf/common.sh@471 -- # waitforlisten 3788722 00:14:31.176 04:05:45 -- common/autotest_common.sh@817 -- # '[' -z 3788722 ']' 00:14:31.176 04:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.176 04:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.176 04:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.176 04:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.176 04:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:31.176 04:05:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:31.434 [2024-04-19 04:05:45.723242] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
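Stripped of the xtrace noise, the nvmf_tcp_init sequence above builds a two-port loopback topology: one E810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed replay of the commands traced above (address flushes and other housekeeping omitted):

    # Target port gets its own namespace so initiator and target traffic
    # crosses the real TCP stack instead of kernel loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
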
00:14:31.434 [2024-04-19 04:05:45.723300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.434 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.434 [2024-04-19 04:05:45.809903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.434 [2024-04-19 04:05:45.902605] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.434 [2024-04-19 04:05:45.902649] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.434 [2024-04-19 04:05:45.902659] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.434 [2024-04-19 04:05:45.902668] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.434 [2024-04-19 04:05:45.902675] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.434 [2024-04-19 04:05:45.902717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.434 [2024-04-19 04:05:45.902817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.434 [2024-04-19 04:05:45.902930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.434 [2024-04-19 04:05:45.902931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.366 04:05:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.366 04:05:46 -- common/autotest_common.sh@850 -- # return 0 00:14:32.366 04:05:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:32.366 04:05:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:32.366 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.366 04:05:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.366 04:05:46 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:32.366 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.366 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.366 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.366 04:05:46 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:32.366 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.366 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.366 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.366 04:05:46 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.366 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.366 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 [2024-04-19 04:05:46.778254] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.367 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:32.367 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.367 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 Malloc0 00:14:32.367 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.367 04:05:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.367 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.367 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.367 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.367 04:05:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.367 04:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 [2024-04-19 04:05:46.849167] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.367 04:05:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3789002 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@30 -- # READ_PID=3789004 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # config=() 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # local subsystem config 00:14:32.367 04:05:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:32.367 { 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme$subsystem", 00:14:32.367 "trtype": "$TEST_TRANSPORT", 00:14:32.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "$NVMF_PORT", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.367 "hdgst": ${hdgst:-false}, 00:14:32.367 "ddgst": ${ddgst:-false} 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 } 00:14:32.367 EOF 00:14:32.367 )") 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3789006 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # config=() 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # local subsystem config 00:14:32.367 04:05:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:32.367 { 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme$subsystem", 00:14:32.367 "trtype": "$TEST_TRANSPORT", 00:14:32.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "$NVMF_PORT", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.367 "hdgst": ${hdgst:-false}, 00:14:32.367 "ddgst": ${ddgst:-false} 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 } 00:14:32.367 EOF 00:14:32.367 )") 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3789009 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # cat 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@35 -- # sync 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # config=() 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # local subsystem config 00:14:32.367 04:05:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:32.367 { 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme$subsystem", 00:14:32.367 "trtype": "$TEST_TRANSPORT", 00:14:32.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "$NVMF_PORT", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.367 "hdgst": ${hdgst:-false}, 00:14:32.367 "ddgst": ${ddgst:-false} 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 } 00:14:32.367 EOF 00:14:32.367 )") 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # config=() 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # cat 00:14:32.367 04:05:46 -- nvmf/common.sh@521 -- # local subsystem config 00:14:32.367 04:05:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:32.367 { 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme$subsystem", 00:14:32.367 "trtype": "$TEST_TRANSPORT", 00:14:32.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "$NVMF_PORT", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.367 "hdgst": ${hdgst:-false}, 00:14:32.367 "ddgst": ${ddgst:-false} 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 } 00:14:32.367 EOF 00:14:32.367 )") 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # cat 00:14:32.367 04:05:46 -- target/bdev_io_wait.sh@37 -- # wait 3789002 00:14:32.367 04:05:46 -- nvmf/common.sh@543 -- # cat 00:14:32.367 04:05:46 -- nvmf/common.sh@545 -- # jq . 00:14:32.367 04:05:46 -- nvmf/common.sh@545 -- # jq . 00:14:32.367 04:05:46 -- nvmf/common.sh@546 -- # IFS=, 00:14:32.367 04:05:46 -- nvmf/common.sh@545 -- # jq . 00:14:32.367 04:05:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme1", 00:14:32.367 "trtype": "tcp", 00:14:32.367 "traddr": "10.0.0.2", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "4420", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.367 "hdgst": false, 00:14:32.367 "ddgst": false 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 }' 00:14:32.367 04:05:46 -- nvmf/common.sh@545 -- # jq . 
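Each of the four bdevperf instances reads its bdev configuration from the --json /dev/fd/63 process substitution; gen_nvmf_target_json expands the heredoc above into one bdev_nvme_attach_controller stanza per subsystem and pipes the result through jq. A sketch rebuilding that stanza with this run's values via a plain jq -n call (an illustration of the output, not the literal pipeline; the enclosing "subsystems"/"bdev" wrapper that bdevperf ultimately parses is omitted here):

    # Reconstruct the attach-controller stanza printed by the
    # heredoc + "jq ." pipeline above.
    jq -n '{
      params: {
        name: "Nvme1",
        trtype: "tcp",
        traddr: "10.0.0.2",
        adrfam: "ipv4",
        trsvcid: "4420",
        subnqn: "nqn.2016-06.io.spdk:cnode1",
        hostnqn: "nqn.2016-06.io.spdk:host1",
        hdgst: false,
        ddgst: false
      },
      method: "bdev_nvme_attach_controller"
    }'
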
00:14:32.367 04:05:46 -- nvmf/common.sh@546 -- # IFS=, 00:14:32.367 04:05:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme1", 00:14:32.367 "trtype": "tcp", 00:14:32.367 "traddr": "10.0.0.2", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "4420", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.367 "hdgst": false, 00:14:32.367 "ddgst": false 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 }' 00:14:32.367 04:05:46 -- nvmf/common.sh@546 -- # IFS=, 00:14:32.367 04:05:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme1", 00:14:32.367 "trtype": "tcp", 00:14:32.367 "traddr": "10.0.0.2", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "4420", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.367 "hdgst": false, 00:14:32.367 "ddgst": false 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 }' 00:14:32.367 04:05:46 -- nvmf/common.sh@546 -- # IFS=, 00:14:32.367 04:05:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:32.367 "params": { 00:14:32.367 "name": "Nvme1", 00:14:32.367 "trtype": "tcp", 00:14:32.367 "traddr": "10.0.0.2", 00:14:32.367 "adrfam": "ipv4", 00:14:32.367 "trsvcid": "4420", 00:14:32.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.367 "hdgst": false, 00:14:32.367 "ddgst": false 00:14:32.367 }, 00:14:32.367 "method": "bdev_nvme_attach_controller" 00:14:32.367 }' 00:14:32.625 [2024-04-19 04:05:46.901810] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:32.625 [2024-04-19 04:05:46.901876] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:32.625 [2024-04-19 04:05:46.903556] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:32.625 [2024-04-19 04:05:46.903609] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:32.625 [2024-04-19 04:05:46.905460] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:32.625 [2024-04-19 04:05:46.905523] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:32.625 [2024-04-19 04:05:46.905552] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:14:32.625 [2024-04-19 04:05:46.905603] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:14:32.625 EAL: No free 2048 kB hugepages reported on node 1
00:14:32.625 EAL: No free 2048 kB hugepages reported on node 1
00:14:32.625 [2024-04-19 04:05:47.094582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.625 EAL: No free 2048 kB hugepages reported on node 1
00:14:32.882 [2024-04-19 04:05:47.182649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:14:32.882 [2024-04-19 04:05:47.187373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.882 EAL: No free 2048 kB hugepages reported on node 1
00:14:32.882 [2024-04-19 04:05:47.276053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:14:32.882 [2024-04-19 04:05:47.287055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.882 [2024-04-19 04:05:47.340081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.882 [2024-04-19 04:05:47.386796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:14:33.140 [2024-04-19 04:05:47.428865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:14:33.140 Running I/O for 1 seconds...
00:14:33.140 Running I/O for 1 seconds...
00:14:33.397 Running I/O for 1 seconds...
00:14:33.397 Running I/O for 1 seconds...
00:14:33.961
00:14:33.961 Latency(us)
00:14:33.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:33.961 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:33.961 Nvme1n1 : 1.01 8932.16 34.89 0.00 0.00 14265.03 7685.59 21209.83
00:14:33.961 ===================================================================================================================
00:14:33.961 Total : 8932.16 34.89 0.00 0.00 14265.03 7685.59 21209.83
00:14:34.219 04:05:48 -- target/bdev_io_wait.sh@38 -- # wait 3789004
00:14:34.219
00:14:34.219 Latency(us)
00:14:34.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.219 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:34.219 Nvme1n1 : 1.01 7395.35 28.89 0.00 0.00 17220.08 9770.82 26810.18
00:14:34.219 ===================================================================================================================
00:14:34.219 Total : 7395.35 28.89 0.00 0.00 17220.08 9770.82 26810.18
00:14:34.219
00:14:34.219 Latency(us)
00:14:34.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.219 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:34.219 Nvme1n1 : 1.00 168883.03 659.70 0.00 0.00 754.47 297.89 886.23
00:14:34.219 ===================================================================================================================
00:14:34.219 Total : 168883.03 659.70 0.00 0.00 754.47 297.89 886.23
00:14:34.219
00:14:34.219 Latency(us)
00:14:34.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.219 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:34.219 Nvme1n1 : 1.01 8127.32 31.75 0.00 0.00 15696.50 5779.08 29550.78
00:14:34.219 ===================================================================================================================
00:14:34.219 Total : 8127.32 31.75 0.00 0.00 15696.50 5779.08 29550.78
00:14:34.476
04:05:48 -- target/bdev_io_wait.sh@39 -- # wait 3789006 00:14:34.476 04:05:48 -- target/bdev_io_wait.sh@40 -- # wait 3789009 00:14:34.476 04:05:48 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.476 04:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.476 04:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:34.476 04:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.476 04:05:48 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:34.476 04:05:48 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:34.476 04:05:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:34.476 04:05:48 -- nvmf/common.sh@117 -- # sync 00:14:34.476 04:05:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.476 04:05:48 -- nvmf/common.sh@120 -- # set +e 00:14:34.476 04:05:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.476 04:05:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.476 rmmod nvme_tcp 00:14:34.476 rmmod nvme_fabrics 00:14:34.476 rmmod nvme_keyring 00:14:34.476 04:05:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.476 04:05:48 -- nvmf/common.sh@124 -- # set -e 00:14:34.476 04:05:48 -- nvmf/common.sh@125 -- # return 0 00:14:34.476 04:05:48 -- nvmf/common.sh@478 -- # '[' -n 3788722 ']' 00:14:34.476 04:05:48 -- nvmf/common.sh@479 -- # killprocess 3788722 00:14:34.476 04:05:48 -- common/autotest_common.sh@936 -- # '[' -z 3788722 ']' 00:14:34.476 04:05:48 -- common/autotest_common.sh@940 -- # kill -0 3788722 00:14:34.477 04:05:49 -- common/autotest_common.sh@941 -- # uname 00:14:34.735 04:05:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.735 04:05:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3788722 00:14:34.735 04:05:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.735 04:05:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.735 04:05:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3788722' 00:14:34.735 killing process with pid 3788722 00:14:34.735 04:05:49 -- common/autotest_common.sh@955 -- # kill 3788722 00:14:34.735 04:05:49 -- common/autotest_common.sh@960 -- # wait 3788722 00:14:34.993 04:05:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:34.993 04:05:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:34.993 04:05:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:34.993 04:05:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.993 04:05:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.993 04:05:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.993 04:05:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.993 04:05:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.892 04:05:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.892 00:14:36.892 real 0m11.394s 00:14:36.892 user 0m20.841s 00:14:36.892 sys 0m5.917s 00:14:36.893 04:05:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.893 04:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.893 ************************************ 00:14:36.893 END TEST nvmf_bdev_io_wait 00:14:36.893 ************************************ 00:14:36.893 04:05:51 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:36.893 04:05:51 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:14:36.893 04:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.893 04:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:37.151 ************************************ 00:14:37.151 START TEST nvmf_queue_depth 00:14:37.151 ************************************ 00:14:37.151 04:05:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:37.151 * Looking for test storage... 00:14:37.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.151 04:05:51 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.151 04:05:51 -- nvmf/common.sh@7 -- # uname -s 00:14:37.151 04:05:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.151 04:05:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.151 04:05:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.151 04:05:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.151 04:05:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.151 04:05:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.151 04:05:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.151 04:05:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.151 04:05:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.151 04:05:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.151 04:05:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:37.151 04:05:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:37.151 04:05:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.151 04:05:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.151 04:05:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.151 04:05:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.151 04:05:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.151 04:05:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.151 04:05:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.151 04:05:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.151 04:05:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.151 04:05:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.151 04:05:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.151 04:05:51 -- paths/export.sh@5 -- # export PATH 00:14:37.151 04:05:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.151 04:05:51 -- nvmf/common.sh@47 -- # : 0 00:14:37.151 04:05:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.151 04:05:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.151 04:05:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.151 04:05:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.151 04:05:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.151 04:05:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.151 04:05:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.151 04:05:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.151 04:05:51 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:37.151 04:05:51 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:37.151 04:05:51 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.151 04:05:51 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:37.151 04:05:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:37.151 04:05:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.151 04:05:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:37.151 04:05:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:37.151 04:05:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:37.151 04:05:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.151 04:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.151 04:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.151 04:05:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:37.151 04:05:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:37.151 04:05:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.151 04:05:51 -- 
common/autotest_common.sh@10 -- # set +x 00:14:43.713 04:05:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.713 04:05:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.713 04:05:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.713 04:05:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.713 04:05:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.713 04:05:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.713 04:05:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.713 04:05:57 -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.713 04:05:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.713 04:05:57 -- nvmf/common.sh@296 -- # e810=() 00:14:43.713 04:05:57 -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.713 04:05:57 -- nvmf/common.sh@297 -- # x722=() 00:14:43.713 04:05:57 -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.713 04:05:57 -- nvmf/common.sh@298 -- # mlx=() 00:14:43.713 04:05:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.713 04:05:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.713 04:05:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.713 04:05:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:43.713 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:43.713 04:05:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.713 04:05:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:43.713 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:43.713 04:05:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:14:43.713 04:05:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.713 04:05:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.713 04:05:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.713 04:05:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:43.713 Found net devices under 0000:af:00.0: cvl_0_0 00:14:43.713 04:05:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.713 04:05:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.713 04:05:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.713 04:05:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:43.713 Found net devices under 0000:af:00.1: cvl_0_1 00:14:43.713 04:05:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:43.713 04:05:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:43.713 04:05:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:43.713 04:05:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.713 04:05:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.713 04:05:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.713 04:05:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.713 04:05:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.713 04:05:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.713 04:05:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.713 04:05:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.713 04:05:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.713 04:05:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.713 04:05:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.713 04:05:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.713 04:05:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.713 04:05:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.713 04:05:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.713 04:05:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.713 04:05:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.713 04:05:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.713 04:05:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:14:43.713 00:14:43.713 --- 10.0.0.2 ping statistics --- 00:14:43.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.713 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:14:43.713 04:05:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:43.713 00:14:43.713 --- 10.0.0.1 ping statistics --- 00:14:43.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.713 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:43.713 04:05:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.713 04:05:57 -- nvmf/common.sh@411 -- # return 0 00:14:43.714 04:05:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:43.714 04:05:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.714 04:05:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:43.714 04:05:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:43.714 04:05:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.714 04:05:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:43.714 04:05:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:43.714 04:05:57 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:43.714 04:05:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:43.714 04:05:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 04:05:57 -- nvmf/common.sh@470 -- # nvmfpid=3793030 00:14:43.714 04:05:57 -- nvmf/common.sh@471 -- # waitforlisten 3793030 00:14:43.714 04:05:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.714 04:05:57 -- common/autotest_common.sh@817 -- # '[' -z 3793030 ']' 00:14:43.714 04:05:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.714 04:05:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.714 04:05:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.714 04:05:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 [2024-04-19 04:05:57.403172] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:43.714 [2024-04-19 04:05:57.403226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.714 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.714 [2024-04-19 04:05:57.483521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.714 [2024-04-19 04:05:57.571465] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.714 [2024-04-19 04:05:57.571509] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:43.714 [2024-04-19 04:05:57.571519] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.714 [2024-04-19 04:05:57.571528] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.714 [2024-04-19 04:05:57.571535] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.714 [2024-04-19 04:05:57.571562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.714 04:05:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.714 04:05:57 -- common/autotest_common.sh@850 -- # return 0 00:14:43.714 04:05:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.714 04:05:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 04:05:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.714 04:05:57 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.714 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 [2024-04-19 04:05:57.710059] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.714 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:57 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.714 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 Malloc0 00:14:43.714 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:57 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.714 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:57 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.714 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:57 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.714 04:05:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 [2024-04-19 04:05:57.775046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.714 04:05:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:57 -- target/queue_depth.sh@30 -- # bdevperf_pid=3793049 00:14:43.714 04:05:57 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.714 04:05:57 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:43.714 04:05:57 -- target/queue_depth.sh@33 -- # waitforlisten 3793049 /var/tmp/bdevperf.sock 00:14:43.714 04:05:57 -- common/autotest_common.sh@817 -- # '[' -z 3793049 ']' 
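Behind the rpc_cmd wrapper (which effectively forwards to scripts/rpc.py against the target's default /var/tmp/spdk.sock; the Unix socket lives in the filesystem, so the target running inside the netns is still reachable), the queue-depth bring-up above is four RPCs plus an idle bdevperf launch. A sketch of the equivalent direct invocations, with the paths from this workspace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf starts idle (-z) on its own RPC socket with queue depth 1024;
    # the test attaches a controller and starts I/O over that socket later.
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
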
00:14:43.714 04:05:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.714 04:05:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.714 04:05:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.714 04:05:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.714 04:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 [2024-04-19 04:05:57.826206] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:14:43.714 [2024-04-19 04:05:57.826263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793049 ] 00:14:43.714 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.714 [2024-04-19 04:05:57.904939] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.714 [2024-04-19 04:05:57.994556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.714 04:05:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.714 04:05:58 -- common/autotest_common.sh@850 -- # return 0 00:14:43.714 04:05:58 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:43.714 04:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.714 04:05:58 -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 NVMe0n1 00:14:43.714 04:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.714 04:05:58 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.972 Running I/O for 10 seconds... 
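The idle bdevperf is then driven over its own socket: one RPC attaches the remote namespace as bdev NVMe0n1, and the bundled helper script triggers the preconfigured verify run, as traced above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Create NVMe0n1 inside the running bdevperf process over NVMe/TCP.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Kick off the -q 1024 verify workload configured at launch.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
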
00:14:53.942
00:14:53.942 Latency(us)
00:14:53.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:53.942 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:14:53.942 Verification LBA range: start 0x0 length 0x4000
00:14:53.942 NVMe0n1 : 10.10 7997.98 31.24 0.00 0.00 127458.78 29193.31 81026.33
00:14:53.942 ===================================================================================================================
00:14:53.942 Total : 7997.98 31.24 0.00 0.00 127458.78 29193.31 81026.33
00:14:53.942 0
00:14:53.942 04:06:08 -- target/queue_depth.sh@39 -- # killprocess 3793049
00:14:53.942 04:06:08 -- common/autotest_common.sh@936 -- # '[' -z 3793049 ']'
00:14:53.942 04:06:08 -- common/autotest_common.sh@940 -- # kill -0 3793049
00:14:53.942 04:06:08 -- common/autotest_common.sh@941 -- # uname
00:14:53.942 04:06:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:53.942 04:06:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3793049
00:14:54.200 04:06:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:54.200 04:06:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:54.200 04:06:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3793049'
00:14:54.200 killing process with pid 3793049
00:14:54.200 04:06:08 -- common/autotest_common.sh@955 -- # kill 3793049
00:14:54.200 Received shutdown signal, test time was about 10.000000 seconds
00:14:54.200
00:14:54.200 Latency(us)
00:14:54.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:54.200 ===================================================================================================================
00:14:54.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:54.200 04:06:08 -- common/autotest_common.sh@960 -- # wait 3793049
00:14:54.200 04:06:08 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:54.200 04:06:08 -- target/queue_depth.sh@43 -- # nvmftestfini
00:14:54.200 04:06:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:54.200 04:06:08 -- nvmf/common.sh@117 -- # sync
00:14:54.200 04:06:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:54.200 04:06:08 -- nvmf/common.sh@120 -- # set +e
00:14:54.200 04:06:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:54.200 04:06:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:54.459 rmmod nvme_tcp
00:14:54.459 rmmod nvme_fabrics
00:14:54.459 rmmod nvme_keyring
00:14:54.459 04:06:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:54.459 04:06:08 -- nvmf/common.sh@124 -- # set -e
00:14:54.459 04:06:08 -- nvmf/common.sh@125 -- # return 0
00:14:54.459 04:06:08 -- nvmf/common.sh@478 -- # '[' -n 3793030 ']'
00:14:54.459 04:06:08 -- nvmf/common.sh@479 -- # killprocess 3793030
00:14:54.459 04:06:08 -- common/autotest_common.sh@936 -- # '[' -z 3793030 ']'
00:14:54.459 04:06:08 -- common/autotest_common.sh@940 -- # kill -0 3793030
00:14:54.459 04:06:08 -- common/autotest_common.sh@941 -- # uname
00:14:54.459 04:06:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:54.459 04:06:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3793030
00:14:54.459 04:06:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:54.459 04:06:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:54.459 04:06:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3793030'
00:14:54.459 killing process with pid 3793030
04:06:08 -- common/autotest_common.sh@955 -- # kill 3793030 00:14:54.459 04:06:08 -- common/autotest_common.sh@960 -- # wait 3793030 00:14:54.718 04:06:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:54.718 04:06:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:54.718 04:06:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:54.718 04:06:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.718 04:06:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.718 04:06:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.718 04:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.718 04:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.622 04:06:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.622 00:14:56.622 real 0m19.627s 00:14:56.622 user 0m23.572s 00:14:56.622 sys 0m5.646s 00:14:56.622 04:06:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.622 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:56.622 ************************************ 00:14:56.622 END TEST nvmf_queue_depth 00:14:56.622 ************************************ 00:14:56.880 04:06:11 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:56.880 04:06:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.880 04:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.880 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:56.880 ************************************ 00:14:56.880 START TEST nvmf_multipath 00:14:56.880 ************************************ 00:14:56.880 04:06:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:56.880 * Looking for test storage... 
00:14:57.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.139 04:06:11 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.139 04:06:11 -- nvmf/common.sh@7 -- # uname -s 00:14:57.139 04:06:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.139 04:06:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.139 04:06:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.139 04:06:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.139 04:06:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.139 04:06:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.139 04:06:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.139 04:06:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.139 04:06:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.139 04:06:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.139 04:06:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:57.139 04:06:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:57.139 04:06:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.139 04:06:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.139 04:06:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.139 04:06:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.139 04:06:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.139 04:06:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.139 04:06:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.139 04:06:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.139 04:06:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.139 04:06:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.139 04:06:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.139 04:06:11 -- paths/export.sh@5 -- # export PATH 00:14:57.139 04:06:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.139 04:06:11 -- nvmf/common.sh@47 -- # : 0 00:14:57.139 04:06:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.139 04:06:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.139 04:06:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.139 04:06:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.139 04:06:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.139 04:06:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.139 04:06:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.139 04:06:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.139 04:06:11 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.139 04:06:11 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.139 04:06:11 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:57.139 04:06:11 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.139 04:06:11 -- target/multipath.sh@43 -- # nvmftestinit 00:14:57.139 04:06:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:57.139 04:06:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.139 04:06:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:57.139 04:06:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:57.139 04:06:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:57.139 04:06:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.139 04:06:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.139 04:06:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.139 04:06:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:57.139 04:06:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:57.139 04:06:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.139 04:06:11 -- common/autotest_common.sh@10 -- # set +x 00:15:02.400 04:06:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.400 04:06:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.400 04:06:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.400 04:06:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.400 04:06:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.400 04:06:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.400 04:06:16 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.400 04:06:16 -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.400 04:06:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.400 04:06:16 -- nvmf/common.sh@296 -- # e810=() 00:15:02.400 04:06:16 -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.400 04:06:16 -- nvmf/common.sh@297 -- # x722=() 00:15:02.400 04:06:16 -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.400 04:06:16 -- nvmf/common.sh@298 -- # mlx=() 00:15:02.400 04:06:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.400 04:06:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.400 04:06:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.400 04:06:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.400 04:06:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.400 04:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:02.400 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:02.400 04:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.400 04:06:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:02.400 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:02.400 04:06:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.400 04:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.400 04:06:16 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.400 04:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:02.400 Found net devices under 0000:af:00.0: cvl_0_0 00:15:02.400 04:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.400 04:06:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.400 04:06:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.400 04:06:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.400 04:06:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:02.400 Found net devices under 0000:af:00.1: cvl_0_1 00:15:02.400 04:06:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.400 04:06:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:02.400 04:06:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:02.400 04:06:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:02.400 04:06:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.400 04:06:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.400 04:06:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.400 04:06:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.400 04:06:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.400 04:06:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.400 04:06:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.400 04:06:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.400 04:06:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.401 04:06:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.401 04:06:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.401 04:06:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.401 04:06:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.401 04:06:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.401 04:06:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.401 04:06:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.401 04:06:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.681 04:06:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.681 04:06:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.681 04:06:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:15:02.681 00:15:02.681 --- 10.0.0.2 ping statistics --- 00:15:02.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.681 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:02.681 04:06:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:15:02.681 00:15:02.681 --- 10.0.0.1 ping statistics --- 00:15:02.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.681 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:15:02.681 04:06:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.681 04:06:17 -- nvmf/common.sh@411 -- # return 0 00:15:02.681 04:06:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:02.681 04:06:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.681 04:06:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:02.681 04:06:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:02.681 04:06:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.681 04:06:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:02.681 04:06:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:02.681 04:06:17 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:02.681 04:06:17 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:02.681 only one NIC for nvmf test 00:15:02.681 04:06:17 -- target/multipath.sh@47 -- # nvmftestfini 00:15:02.681 04:06:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:02.681 04:06:17 -- nvmf/common.sh@117 -- # sync 00:15:02.681 04:06:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.681 04:06:17 -- nvmf/common.sh@120 -- # set +e 00:15:02.681 04:06:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.681 04:06:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.681 rmmod nvme_tcp 00:15:02.681 rmmod nvme_fabrics 00:15:02.681 rmmod nvme_keyring 00:15:02.681 04:06:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.681 04:06:17 -- nvmf/common.sh@124 -- # set -e 00:15:02.681 04:06:17 -- nvmf/common.sh@125 -- # return 0 00:15:02.681 04:06:17 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:02.681 04:06:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:02.681 04:06:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:02.682 04:06:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:02.682 04:06:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.682 04:06:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.682 04:06:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.682 04:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.682 04:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.220 04:06:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.220 04:06:19 -- target/multipath.sh@48 -- # exit 0 00:15:05.220 04:06:19 -- target/multipath.sh@1 -- # nvmftestfini 00:15:05.220 04:06:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:05.220 04:06:19 -- nvmf/common.sh@117 -- # sync 00:15:05.220 04:06:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.220 04:06:19 -- nvmf/common.sh@120 -- # set +e 00:15:05.220 04:06:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.220 04:06:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.220 04:06:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.220 04:06:19 -- nvmf/common.sh@124 -- # set -e 00:15:05.220 04:06:19 -- nvmf/common.sh@125 -- # return 0 00:15:05.220 04:06:19 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:05.220 04:06:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:05.220 04:06:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:05.220 04:06:19 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:05.220 04:06:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.220 04:06:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.220 04:06:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.220 04:06:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.220 04:06:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.220 04:06:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.220 00:15:05.220 real 0m7.898s 00:15:05.220 user 0m1.568s 00:15:05.220 sys 0m4.314s 00:15:05.220 04:06:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:05.220 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:05.220 ************************************ 00:15:05.220 END TEST nvmf_multipath 00:15:05.220 ************************************ 00:15:05.220 04:06:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:05.220 04:06:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:05.220 04:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.220 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:05.220 ************************************ 00:15:05.220 START TEST nvmf_zcopy 00:15:05.220 ************************************ 00:15:05.220 04:06:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:05.220 * Looking for test storage... 00:15:05.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.220 04:06:19 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.220 04:06:19 -- nvmf/common.sh@7 -- # uname -s 00:15:05.220 04:06:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.220 04:06:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.220 04:06:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.220 04:06:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.220 04:06:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.220 04:06:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.220 04:06:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.220 04:06:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.220 04:06:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.220 04:06:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.220 04:06:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:05.220 04:06:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:05.220 04:06:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.220 04:06:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.220 04:06:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.220 04:06:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.220 04:06:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.220 04:06:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.220 04:06:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.220 04:06:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.220 
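Each source of /etc/opt/spdk-pkgdep/paths/export.sh (above, and again below for the zcopy test) unconditionally prepends the same go/protoc/golangci directories, which is why the PATH values echoed by paths/export.sh keep growing across tests. A minimal idempotent-prepend sketch that would avoid the duplication (hypothetical helper, not part of the SPDK scripts):

  path_prepend() {
      # prepend $1 only when it is not already a PATH component
      case ":$PATH:" in
          *":$1:"*) ;;            # already present, nothing to do
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH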
04:06:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.221 04:06:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.221 04:06:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.221 04:06:19 -- paths/export.sh@5 -- # export PATH 00:15:05.221 04:06:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.221 04:06:19 -- nvmf/common.sh@47 -- # : 0 00:15:05.221 04:06:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.221 04:06:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.221 04:06:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.221 04:06:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.221 04:06:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.221 04:06:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.221 04:06:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.221 04:06:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.221 04:06:19 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:05.221 04:06:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:05.221 04:06:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.221 04:06:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:05.221 04:06:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:05.221 04:06:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:05.221 04:06:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.221 04:06:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:05.221 04:06:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.221 04:06:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:05.221 04:06:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:05.221 04:06:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.221 04:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:10.489 04:06:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:10.489 04:06:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:10.489 04:06:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:10.489 04:06:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:10.489 04:06:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:10.489 04:06:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:10.489 04:06:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:10.489 04:06:24 -- nvmf/common.sh@295 -- # net_devs=() 00:15:10.489 04:06:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:10.489 04:06:24 -- nvmf/common.sh@296 -- # e810=() 00:15:10.489 04:06:24 -- nvmf/common.sh@296 -- # local -ga e810 00:15:10.489 04:06:24 -- nvmf/common.sh@297 -- # x722=() 00:15:10.489 04:06:24 -- nvmf/common.sh@297 -- # local -ga x722 00:15:10.489 04:06:24 -- nvmf/common.sh@298 -- # mlx=() 00:15:10.489 04:06:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:10.489 04:06:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.489 04:06:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:10.489 04:06:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:10.489 04:06:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.489 04:06:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:10.489 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:10.489 04:06:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:10.489 04:06:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:10.489 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:15:10.489 04:06:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.489 04:06:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.489 04:06:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.489 04:06:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:10.489 Found net devices under 0000:af:00.0: cvl_0_0 00:15:10.489 04:06:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.489 04:06:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:10.489 04:06:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.489 04:06:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.489 04:06:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:10.489 Found net devices under 0000:af:00.1: cvl_0_1 00:15:10.489 04:06:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.489 04:06:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:10.489 04:06:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:10.489 04:06:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:10.489 04:06:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.489 04:06:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.489 04:06:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.489 04:06:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:10.489 04:06:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.489 04:06:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.489 04:06:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:10.489 04:06:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.489 04:06:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.489 04:06:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:10.489 04:06:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:10.489 04:06:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.489 04:06:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.748 04:06:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.748 04:06:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.748 04:06:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:10.748 04:06:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.748 04:06:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.748 
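nvmf_tcp_init above rebuilds the single-host loopback topology for this test: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1); the iptables rule and ping checks that follow verify the NVMe/TCP port is reachable. Condensed into a standalone sketch, assuming the cvl_0_0/cvl_0_1 device names from this run:

  set -e
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # initiator -> target reachability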
04:06:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.748 04:06:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:10.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:15:10.748 00:15:10.748 --- 10.0.0.2 ping statistics --- 00:15:10.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.748 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:10.748 04:06:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:15:10.748 00:15:10.748 --- 10.0.0.1 ping statistics --- 00:15:10.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.748 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:10.748 04:06:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.748 04:06:25 -- nvmf/common.sh@411 -- # return 0 00:15:10.748 04:06:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:10.748 04:06:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.748 04:06:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:10.748 04:06:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:10.748 04:06:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.748 04:06:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:10.748 04:06:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:10.748 04:06:25 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:10.748 04:06:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:10.748 04:06:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:10.748 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:10.748 04:06:25 -- nvmf/common.sh@470 -- # nvmfpid=3802838 00:15:10.748 04:06:25 -- nvmf/common.sh@471 -- # waitforlisten 3802838 00:15:10.748 04:06:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:10.748 04:06:25 -- common/autotest_common.sh@817 -- # '[' -z 3802838 ']' 00:15:10.748 04:06:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.748 04:06:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.748 04:06:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.748 04:06:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.748 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:10.748 [2024-04-19 04:06:25.271950] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:10.748 [2024-04-19 04:06:25.272012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.014 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.014 [2024-04-19 04:06:25.352779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.014 [2024-04-19 04:06:25.438912] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:11.014 [2024-04-19 04:06:25.438956] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.014 [2024-04-19 04:06:25.438967] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.014 [2024-04-19 04:06:25.438976] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.014 [2024-04-19 04:06:25.438984] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.014 [2024-04-19 04:06:25.439010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.014 04:06:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.014 04:06:25 -- common/autotest_common.sh@850 -- # return 0 00:15:11.014 04:06:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:11.014 04:06:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:11.014 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 04:06:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.283 04:06:25 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:11.283 04:06:25 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 [2024-04-19 04:06:25.574156] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 [2024-04-19 04:06:25.594355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 malloc0 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:11.283 04:06:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:11.283 04:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:11.283 04:06:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:11.283 04:06:25 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:11.283 04:06:25 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:11.283 04:06:25 -- nvmf/common.sh@521 -- # config=() 00:15:11.283 04:06:25 -- nvmf/common.sh@521 -- # local subsystem config 00:15:11.283 04:06:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:11.283 04:06:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:11.283 { 00:15:11.283 "params": { 00:15:11.283 "name": "Nvme$subsystem", 00:15:11.283 "trtype": "$TEST_TRANSPORT", 00:15:11.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:11.283 "adrfam": "ipv4", 00:15:11.283 "trsvcid": "$NVMF_PORT", 00:15:11.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:11.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:11.283 "hdgst": ${hdgst:-false}, 00:15:11.283 "ddgst": ${ddgst:-false} 00:15:11.283 }, 00:15:11.283 "method": "bdev_nvme_attach_controller" 00:15:11.283 } 00:15:11.283 EOF 00:15:11.283 )") 00:15:11.283 04:06:25 -- nvmf/common.sh@543 -- # cat 00:15:11.283 04:06:25 -- nvmf/common.sh@545 -- # jq . 00:15:11.283 04:06:25 -- nvmf/common.sh@546 -- # IFS=, 00:15:11.283 04:06:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:11.283 "params": { 00:15:11.283 "name": "Nvme1", 00:15:11.283 "trtype": "tcp", 00:15:11.283 "traddr": "10.0.0.2", 00:15:11.283 "adrfam": "ipv4", 00:15:11.283 "trsvcid": "4420", 00:15:11.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.283 "hdgst": false, 00:15:11.283 "ddgst": false 00:15:11.283 }, 00:15:11.283 "method": "bdev_nvme_attach_controller" 00:15:11.283 }' 00:15:11.283 [2024-04-19 04:06:25.684671] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:11.283 [2024-04-19 04:06:25.684726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802874 ] 00:15:11.283 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.283 [2024-04-19 04:06:25.766150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.542 [2024-04-19 04:06:25.852271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.800 Running I/O for 10 seconds... 
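The config handed to bdevperf over /dev/fd/62 above is produced by gen_nvmf_target_json: a single bdev_nvme_attach_controller call aimed at the 10.0.0.2:4420 listener created with nvmf_subsystem_add_listener earlier. A standalone equivalent is sketched below; the params block is copied from the printed JSON, while the surrounding subsystems/bdev wrapper is an assumption based on SPDK's JSON-config schema, and /tmp/nvmf.json is an illustrative filename:

  jq -n '{subsystems: [{subsystem: "bdev", config: [{
      method: "bdev_nvme_attach_controller",
      params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
               adrfam: "ipv4", trsvcid: "4420",
               subnqn: "nqn.2016-06.io.spdk:cnode1",
               hostnqn: "nqn.2016-06.io.spdk:host1",
               hdgst: false, ddgst: false}}]}]}' > /tmp/nvmf.json
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/nvmf.json -t 10 -q 128 -w verify -o 8192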
00:15:21.786 00:15:21.786 Latency(us) 00:15:21.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.786 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:21.786 Verification LBA range: start 0x0 length 0x1000 00:15:21.786 Nvme1n1 : 10.01 5646.41 44.11 0.00 0.00 22598.37 2770.39 31933.91 00:15:21.786 =================================================================================================================== 00:15:21.786 Total : 5646.41 44.11 0.00 0.00 22598.37 2770.39 31933.91 00:15:22.044 04:06:36 -- target/zcopy.sh@39 -- # perfpid=3804770 00:15:22.044 04:06:36 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:22.045 04:06:36 -- common/autotest_common.sh@10 -- # set +x 00:15:22.045 04:06:36 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:22.045 04:06:36 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:22.045 04:06:36 -- nvmf/common.sh@521 -- # config=() 00:15:22.045 04:06:36 -- nvmf/common.sh@521 -- # local subsystem config 00:15:22.045 04:06:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:22.045 04:06:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:22.045 { 00:15:22.045 "params": { 00:15:22.045 "name": "Nvme$subsystem", 00:15:22.045 "trtype": "$TEST_TRANSPORT", 00:15:22.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:22.045 "adrfam": "ipv4", 00:15:22.045 "trsvcid": "$NVMF_PORT", 00:15:22.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:22.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:22.045 "hdgst": ${hdgst:-false}, 00:15:22.045 "ddgst": ${ddgst:-false} 00:15:22.045 }, 00:15:22.045 "method": "bdev_nvme_attach_controller" 00:15:22.045 } 00:15:22.045 EOF 00:15:22.045 )") 00:15:22.045 04:06:36 -- nvmf/common.sh@543 -- # cat 00:15:22.045 [2024-04-19 04:06:36.449104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.449143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 04:06:36 -- nvmf/common.sh@545 -- # jq . 
00:15:22.045 04:06:36 -- nvmf/common.sh@546 -- # IFS=, 00:15:22.045 04:06:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:22.045 "params": { 00:15:22.045 "name": "Nvme1", 00:15:22.045 "trtype": "tcp", 00:15:22.045 "traddr": "10.0.0.2", 00:15:22.045 "adrfam": "ipv4", 00:15:22.045 "trsvcid": "4420", 00:15:22.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.045 "hdgst": false, 00:15:22.045 "ddgst": false 00:15:22.045 }, 00:15:22.045 "method": "bdev_nvme_attach_controller" 00:15:22.045 }' 00:15:22.045 [2024-04-19 04:06:36.461107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.461124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.469123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.469136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.481157] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.481172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.487953] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:22.045 [2024-04-19 04:06:36.488006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804770 ] 00:15:22.045 [2024-04-19 04:06:36.493186] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.493201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.505221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.505235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.517257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.517270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.045 [2024-04-19 04:06:36.529287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.529300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.541323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.541337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.553356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.553382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.565390] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.045 [2024-04-19 04:06:36.565408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.045 [2024-04-19 04:06:36.568388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:15:22.306 [2024-04-19 04:06:36.577425] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.577441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.589456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.589471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.601491] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.601505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.613522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.613542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.625557] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.625575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.637592] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.637605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.649624] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.649638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.656892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.306 [2024-04-19 04:06:36.661661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.661676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.673701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.673724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.685730] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.685746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.697764] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.697780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.709800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.709815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.721838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.721853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.733862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.733876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.745927] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.745952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.757945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.757964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.769975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.769994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.782006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.782026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.794032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.794046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.806069] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.806083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.818110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.818130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.306 [2024-04-19 04:06:36.830139] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.306 [2024-04-19 04:06:36.830156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.565 [2024-04-19 04:06:36.842177] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.565 [2024-04-19 04:06:36.842193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.565 [2024-04-19 04:06:36.854208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.565 [2024-04-19 04:06:36.854221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.565 [2024-04-19 04:06:36.866252] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.866270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.878296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.878313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.890315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.890329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.902353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.902366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.914391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.914408] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.926423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.926436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.938454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.938467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.950496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.950510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.962521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.962535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:36.974564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.974587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 Running I/O for 5 seconds... 00:15:22.566 [2024-04-19 04:06:36.991710] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:36.991734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:37.008430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:37.008455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:37.025300] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:37.025324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:37.042050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:37.042073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:37.058635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:37.058659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.566 [2024-04-19 04:06:37.076358] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.566 [2024-04-19 04:06:37.076382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.824 [2024-04-19 04:06:37.092932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.824 [2024-04-19 04:06:37.092957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.824 [2024-04-19 04:06:37.109324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.824 [2024-04-19 04:06:37.109355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.824 [2024-04-19 04:06:37.127541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.824 [2024-04-19 04:06:37.127567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.824 [2024-04-19 04:06:37.143487] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.971 [2024-04-19 04:06:41.475098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.971 [2024-04-19 04:06:41.475123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.971 [2024-04-19 04:06:41.490827] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.971 [2024-04-19 04:06:41.490851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.501880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.501903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.517351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.517374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.533961] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.533984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.549451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.549474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.566262] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.566286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.581597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.581621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.598251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.598275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.614855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.614878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.632200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.632224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.649032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.649055] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.665200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.665224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.681285] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.681309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.697877] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.697900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.714529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.714552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.732265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.732289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.230 [2024-04-19 04:06:41.748696] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.230 [2024-04-19 04:06:41.748719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.765132] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.765155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.776248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.776272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.792057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.792080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.808605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.808629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.826223] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.826247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.842428] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.842452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.861531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.861557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.877650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.877676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.893902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.893928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.910104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.910129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.926707] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.926731] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.942969] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.942993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.961320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.961351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.977580] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.977605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 [2024-04-19 04:06:41.995456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.488 [2024-04-19 04:06:41.995481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.488 00:15:27.488 Latency(us) 00:15:27.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.489 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:27.489 Nvme1n1 : 5.01 11049.90 86.33 0.00 0.00 11569.57 5362.04 21209.83 00:15:27.489 =================================================================================================================== 00:15:27.489 Total : 11049.90 86.33 0.00 0.00 11569.57 5362.04 21209.83 00:15:27.489 [2024-04-19 04:06:42.006889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.489 [2024-04-19 04:06:42.006912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.018913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.018933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.030952] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.030968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.042991] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.043011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.055025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.055042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.067050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.067069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.079103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.079123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.091122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.091138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.103154] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.103171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.115188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.115203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.127219] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.127232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.139251] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.139265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.151291] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.151306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.163325] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.163339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.175361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.175374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.187396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.187410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.199424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.199439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.211453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.211467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 [2024-04-19 04:06:42.223484] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:27.748 [2024-04-19 04:06:42.223498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3804770) - No such process 00:15:27.748 04:06:42 -- target/zcopy.sh@49 -- # wait 3804770 00:15:27.748 04:06:42 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:27.748 04:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.748 04:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:27.748 04:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.748 04:06:42 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:27.748 04:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.748 04:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:27.748 delay0 00:15:27.748 04:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.748 04:06:42 -- 
target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:27.748 04:06:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.748 04:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:27.748 04:06:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.748 04:06:42 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:28.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.007 [2024-04-19 04:06:42.366436] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:34.633 Initializing NVMe Controllers 00:15:34.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:34.633 Initialization complete. Launching workers. 00:15:34.633 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 848 00:15:34.633 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1135, failed to submit 33 00:15:34.633 success 950, unsuccess 185, failed 0 00:15:34.633 04:06:48 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:34.633 04:06:48 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:34.633 04:06:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:34.633 04:06:48 -- nvmf/common.sh@117 -- # sync 00:15:34.633 04:06:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.633 04:06:48 -- nvmf/common.sh@120 -- # set +e 00:15:34.633 04:06:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.633 04:06:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.633 rmmod nvme_tcp 00:15:34.633 rmmod nvme_fabrics 00:15:34.633 rmmod nvme_keyring 00:15:34.633 04:06:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.633 04:06:48 -- nvmf/common.sh@124 -- # set -e 00:15:34.633 04:06:48 -- nvmf/common.sh@125 -- # return 0 00:15:34.633 04:06:48 -- nvmf/common.sh@478 -- # '[' -n 3802838 ']' 00:15:34.633 04:06:48 -- nvmf/common.sh@479 -- # killprocess 3802838 00:15:34.633 04:06:48 -- common/autotest_common.sh@936 -- # '[' -z 3802838 ']' 00:15:34.633 04:06:48 -- common/autotest_common.sh@940 -- # kill -0 3802838 00:15:34.633 04:06:48 -- common/autotest_common.sh@941 -- # uname 00:15:34.633 04:06:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.633 04:06:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3802838 00:15:34.633 04:06:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:34.633 04:06:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:34.633 04:06:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3802838' 00:15:34.633 killing process with pid 3802838 00:15:34.633 04:06:48 -- common/autotest_common.sh@955 -- # kill 3802838 00:15:34.633 04:06:48 -- common/autotest_common.sh@960 -- # wait 3802838 00:15:34.633 04:06:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:34.633 04:06:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:34.634 04:06:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:34.634 04:06:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.634 04:06:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.634 04:06:49 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.634 04:06:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.634 04:06:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.171 04:06:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.171 00:15:37.171 real 0m31.735s 00:15:37.171 user 0m43.812s 00:15:37.171 sys 0m9.973s 00:15:37.171 04:06:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:37.171 04:06:51 -- common/autotest_common.sh@10 -- # set +x 00:15:37.171 ************************************ 00:15:37.171 END TEST nvmf_zcopy 00:15:37.171 ************************************ 00:15:37.171 04:06:51 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:37.171 04:06:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.171 04:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.171 04:06:51 -- common/autotest_common.sh@10 -- # set +x 00:15:37.171 ************************************ 00:15:37.171 START TEST nvmf_nmic 00:15:37.171 ************************************ 00:15:37.171 04:06:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:37.171 * Looking for test storage... 00:15:37.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.171 04:06:51 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.171 04:06:51 -- nvmf/common.sh@7 -- # uname -s 00:15:37.171 04:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.171 04:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.171 04:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.171 04:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.171 04:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.171 04:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.171 04:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.171 04:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.171 04:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.171 04:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.171 04:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:37.171 04:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:37.171 04:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.171 04:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.171 04:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.171 04:06:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.171 04:06:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.171 04:06:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.171 04:06:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.171 04:06:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.171 04:06:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated four more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.171
04:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[identical PATH value as above, elided] 00:15:37.171
04:06:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[identical PATH value as above, elided] 00:15:37.171
04:06:51 -- paths/export.sh@5 -- # export PATH 00:15:37.171
04:06:51 -- paths/export.sh@6 -- # echo [exported PATH, identical to the value above, elided] 00:15:37.171
04:06:51 -- nvmf/common.sh@47 -- # : 0 00:15:37.171 04:06:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.171 04:06:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.171 04:06:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.171 04:06:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.171 04:06:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.171 04:06:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.171 04:06:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.171 04:06:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.171 04:06:51 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.171 04:06:51 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.171 04:06:51 -- target/nmic.sh@14 -- # nvmftestinit 00:15:37.171 04:06:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:37.171 04:06:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.171 04:06:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:37.171 04:06:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:37.171 04:06:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:37.171 04:06:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:15:37.171 04:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.171 04:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.171 04:06:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:37.171 04:06:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:37.171 04:06:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.171 04:06:51 -- common/autotest_common.sh@10 -- # set +x 00:15:42.437 04:06:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.437 04:06:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.437 04:06:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.437 04:06:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.437 04:06:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.437 04:06:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.437 04:06:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.437 04:06:56 -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.437 04:06:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.437 04:06:56 -- nvmf/common.sh@296 -- # e810=() 00:15:42.437 04:06:56 -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.437 04:06:56 -- nvmf/common.sh@297 -- # x722=() 00:15:42.437 04:06:56 -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.437 04:06:56 -- nvmf/common.sh@298 -- # mlx=() 00:15:42.437 04:06:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.437 04:06:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.437 04:06:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.437 04:06:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.437 04:06:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.437 04:06:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:42.437 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:42.437 04:06:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.437 04:06:56 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:42.437 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:42.437 04:06:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.437 04:06:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.437 04:06:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.437 04:06:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:42.437 Found net devices under 0000:af:00.0: cvl_0_0 00:15:42.437 04:06:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.437 04:06:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.437 04:06:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.437 04:06:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.437 04:06:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:42.437 Found net devices under 0000:af:00.1: cvl_0_1 00:15:42.437 04:06:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.437 04:06:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:42.437 04:06:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:42.437 04:06:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:42.437 04:06:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.437 04:06:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.437 04:06:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.437 04:06:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.437 04:06:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.437 04:06:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.437 04:06:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.437 04:06:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.437 04:06:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.437 04:06:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.437 04:06:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.437 04:06:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.437 04:06:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.437 04:06:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.437 04:06:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.437 04:06:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.437 04:06:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:15:42.696 04:06:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.696 04:06:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.696 04:06:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:42.696 00:15:42.696 --- 10.0.0.2 ping statistics --- 00:15:42.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.696 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:42.696 04:06:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:15:42.696 00:15:42.696 --- 10.0.0.1 ping statistics --- 00:15:42.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.696 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:42.696 04:06:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.696 04:06:57 -- nvmf/common.sh@411 -- # return 0 00:15:42.696 04:06:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:42.696 04:06:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.696 04:06:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:42.696 04:06:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:42.696 04:06:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.696 04:06:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:42.696 04:06:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:42.696 04:06:57 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:42.696 04:06:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:42.696 04:06:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:42.696 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:42.696 04:06:57 -- nvmf/common.sh@470 -- # nvmfpid=3810639 00:15:42.696 04:06:57 -- nvmf/common.sh@471 -- # waitforlisten 3810639 00:15:42.696 04:06:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.696 04:06:57 -- common/autotest_common.sh@817 -- # '[' -z 3810639 ']' 00:15:42.696 04:06:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.696 04:06:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:42.696 04:06:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.696 04:06:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:42.696 04:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:42.696 [2024-04-19 04:06:57.096997] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:15:42.696 [2024-04-19 04:06:57.097051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.696 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.696 [2024-04-19 04:06:57.185068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.954 [2024-04-19 04:06:57.275919] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.954 [2024-04-19 04:06:57.275963] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.954 [2024-04-19 04:06:57.275974] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.954 [2024-04-19 04:06:57.275982] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.954 [2024-04-19 04:06:57.275989] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.954 [2024-04-19 04:06:57.276039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.954 [2024-04-19 04:06:57.276140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.954 [2024-04-19 04:06:57.276254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.954 [2024-04-19 04:06:57.276254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.517 04:06:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:43.517 04:06:58 -- common/autotest_common.sh@850 -- # return 0 00:15:43.517 04:06:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:43.517 04:06:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:43.517 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 04:06:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.774 04:06:58 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 [2024-04-19 04:06:58.081143] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 Malloc0 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4420 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 [2024-04-19 04:06:58.141109] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:43.774 test case1: single bdev can't be used in multiple subsystems 00:15:43.774 04:06:58 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@28 -- # nmic_status=0 00:15:43.774 04:06:58 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 [2024-04-19 04:06:58.165016] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:43.774 [2024-04-19 04:06:58.165041] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:43.774 [2024-04-19 04:06:58.165051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.774 request: 00:15:43.774 { 00:15:43.774 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:43.774 "namespace": { 00:15:43.774 "bdev_name": "Malloc0", 00:15:43.774 "no_auto_visible": false 00:15:43.774 }, 00:15:43.774 "method": "nvmf_subsystem_add_ns", 00:15:43.774 "req_id": 1 00:15:43.774 } 00:15:43.774 Got JSON-RPC error response 00:15:43.774 response: 00:15:43.774 { 00:15:43.774 "code": -32602, 00:15:43.774 "message": "Invalid parameters" 00:15:43.774 } 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@29 -- # nmic_status=1 00:15:43.774 04:06:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:43.774 04:06:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:43.774 Adding namespace failed - expected result. 
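The exclusive-claim failure that test case1 just demonstrated can be reproduced by hand against a running nvmf_tgt. A minimal sketch, assuming the stock scripts/rpc.py from the SPDK tree; the nqn and serial values mirror the ones logged above:

    # Create the backing bdev and the first subsystem, then attach the bdev.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # A second subsystem cannot claim the same bdev: cnode1 already holds an
    # exclusive_write claim, so this last call fails with "Invalid parameters"
    # (JSON-RPC code -32602), matching the bdev_open error logged above.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0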
00:15:43.774 04:06:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:43.774 test case2: host connect to nvmf target in multiple paths 00:15:43.774 04:06:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:43.774 04:06:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.774 04:06:58 -- common/autotest_common.sh@10 -- # set +x 00:15:43.774 [2024-04-19 04:06:58.177159] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.774 04:06:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.774 04:06:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.147 04:06:59 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:46.519 04:07:00 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.519 04:07:00 -- common/autotest_common.sh@1184 -- # local i=0 00:15:46.519 04:07:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.519 04:07:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:46.519 04:07:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:48.416 04:07:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:48.416 04:07:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:48.416 04:07:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.416 04:07:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:48.416 04:07:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.416 04:07:02 -- common/autotest_common.sh@1194 -- # return 0 00:15:48.416 04:07:02 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:48.416 [global] 00:15:48.416 thread=1 00:15:48.416 invalidate=1 00:15:48.416 rw=write 00:15:48.416 time_based=1 00:15:48.416 runtime=1 00:15:48.416 ioengine=libaio 00:15:48.416 direct=1 00:15:48.416 bs=4096 00:15:48.416 iodepth=1 00:15:48.416 norandommap=0 00:15:48.416 numjobs=1 00:15:48.416 00:15:48.416 verify_dump=1 00:15:48.416 verify_backlog=512 00:15:48.416 verify_state_save=0 00:15:48.416 do_verify=1 00:15:48.416 verify=crc32c-intel 00:15:48.416 [job0] 00:15:48.416 filename=/dev/nvme0n1 00:15:48.416 Could not set queue depth (nvme0n1) 00:15:48.674 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:48.674 fio-3.35 00:15:48.674 Starting 1 thread 00:15:50.045 00:15:50.045 job0: (groupid=0, jobs=1): err= 0: pid=3811937: Fri Apr 19 04:07:04 2024 00:15:50.045 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:15:50.045 slat (nsec): min=10367, max=23132, avg=21375.91, stdev=2542.39 00:15:50.045 clat (usec): min=40801, max=41164, avg=40979.03, stdev=100.06 00:15:50.045 lat (usec): min=40824, max=41187, avg=41000.41, stdev=98.96 00:15:50.045 clat percentiles (usec): 00:15:50.045 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:15:50.045 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:50.045 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:50.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:50.045 | 99.99th=[41157] 00:15:50.045 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:15:50.045 slat (nsec): min=10165, max=39813, avg=11656.85, stdev=2120.66 00:15:50.045 clat (usec): min=188, max=367, avg=221.02, stdev= 9.56 00:15:50.045 lat (usec): min=201, max=406, avg=232.68, stdev=10.27 00:15:50.045 clat percentiles (usec): 00:15:50.045 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 217], 00:15:50.045 | 30.00th=[ 219], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:15:50.045 | 70.00th=[ 225], 80.00th=[ 227], 90.00th=[ 229], 95.00th=[ 231], 00:15:50.045 | 99.00th=[ 239], 99.50th=[ 265], 99.90th=[ 367], 99.95th=[ 367], 00:15:50.045 | 99.99th=[ 367] 00:15:50.045 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:50.045 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:50.045 lat (usec) : 250=95.32%, 500=0.56% 00:15:50.045 lat (msec) : 50=4.12% 00:15:50.045 cpu : usr=0.88%, sys=0.49%, ctx=534, majf=0, minf=2 00:15:50.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.045 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.045 00:15:50.045 Run status group 0 (all jobs): 00:15:50.045 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), run=1023-1023msec 00:15:50.045 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:15:50.045 00:15:50.045 Disk stats (read/write): 00:15:50.045 nvme0n1: ios=68/512, merge=0/0, ticks=776/111, in_queue=887, util=92.18% 00:15:50.045 04:07:04 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:50.045 04:07:04 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.045 04:07:04 -- common/autotest_common.sh@1205 -- # local i=0 00:15:50.045 04:07:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:50.045 04:07:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.045 04:07:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:50.045 04:07:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.045 04:07:04 -- common/autotest_common.sh@1217 -- # return 0 00:15:50.045 04:07:04 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:50.045 04:07:04 -- target/nmic.sh@53 -- # nvmftestfini 00:15:50.045 04:07:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:50.045 04:07:04 -- nvmf/common.sh@117 -- # sync 00:15:50.045 04:07:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.045 04:07:04 -- nvmf/common.sh@120 -- # set +e 00:15:50.045 04:07:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.045 04:07:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.045 rmmod nvme_tcp 00:15:50.045 rmmod nvme_fabrics 00:15:50.045 rmmod nvme_keyring 00:15:50.045 04:07:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.045 04:07:04 -- nvmf/common.sh@124 -- # set -e 00:15:50.045 04:07:04 -- 
nvmf/common.sh@125 -- # return 0 00:15:50.045 04:07:04 -- nvmf/common.sh@478 -- # '[' -n 3810639 ']' 00:15:50.045 04:07:04 -- nvmf/common.sh@479 -- # killprocess 3810639 00:15:50.045 04:07:04 -- common/autotest_common.sh@936 -- # '[' -z 3810639 ']' 00:15:50.045 04:07:04 -- common/autotest_common.sh@940 -- # kill -0 3810639 00:15:50.045 04:07:04 -- common/autotest_common.sh@941 -- # uname 00:15:50.045 04:07:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.045 04:07:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3810639 00:15:50.304 04:07:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.304 04:07:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.304 04:07:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3810639' 00:15:50.304 killing process with pid 3810639 00:15:50.304 04:07:04 -- common/autotest_common.sh@955 -- # kill 3810639 00:15:50.304 04:07:04 -- common/autotest_common.sh@960 -- # wait 3810639 00:15:50.563 04:07:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:50.563 04:07:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:50.563 04:07:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:50.563 04:07:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.563 04:07:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.563 04:07:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.563 04:07:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.563 04:07:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.469 04:07:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:52.469 00:15:52.469 real 0m15.643s 00:15:52.469 user 0m42.692s 00:15:52.469 sys 0m5.127s 00:15:52.469 04:07:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.469 04:07:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.469 ************************************ 00:15:52.469 END TEST nvmf_nmic 00:15:52.469 ************************************ 00:15:52.469 04:07:06 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:52.469 04:07:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.469 04:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.469 04:07:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.728 ************************************ 00:15:52.728 START TEST nvmf_fio_target 00:15:52.728 ************************************ 00:15:52.728 04:07:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:52.728 * Looking for test storage... 
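For reference, the [job0] verify workload logged above can also be run outside the fio-wrapper script. A minimal equivalent invocation, assuming the namespace is still connected as /dev/nvme0n1; the flags mirror the generated job file:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 \
        --invalidate=1 --verify_state_save=0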
00:15:52.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.728 04:07:07 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.728 04:07:07 -- nvmf/common.sh@7 -- # uname -s 00:15:52.728 04:07:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.728 04:07:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.728 04:07:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.728 04:07:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.728 04:07:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.728 04:07:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.728 04:07:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.728 04:07:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.728 04:07:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.728 04:07:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.728 04:07:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:52.728 04:07:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:52.728 04:07:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.728 04:07:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.728 04:07:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.728 04:07:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.728 04:07:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.728 04:07:07 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.728 04:07:07 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.728 04:07:07 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.728 04:07:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.728 04:07:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.728 04:07:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.728 04:07:07 -- paths/export.sh@5 -- # export PATH 00:15:52.728 04:07:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.728 04:07:07 -- nvmf/common.sh@47 -- # : 0 00:15:52.728 04:07:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.728 04:07:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.728 04:07:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.728 04:07:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.728 04:07:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.728 04:07:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.728 04:07:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.728 04:07:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.728 04:07:07 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:52.728 04:07:07 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:52.728 04:07:07 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:52.728 04:07:07 -- target/fio.sh@16 -- # nvmftestinit 00:15:52.728 04:07:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:52.728 04:07:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.728 04:07:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:52.728 04:07:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:52.728 04:07:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:52.728 04:07:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.728 04:07:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.728 04:07:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.728 04:07:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:52.728 04:07:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:52.728 04:07:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:52.728 04:07:07 -- common/autotest_common.sh@10 -- # set +x 00:15:59.310 04:07:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:59.310 04:07:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.311 04:07:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.311 04:07:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.311 04:07:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.311 04:07:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.311 04:07:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.311 04:07:12 -- nvmf/common.sh@295 -- # net_devs=() 
00:15:59.311 04:07:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.311 04:07:12 -- nvmf/common.sh@296 -- # e810=() 00:15:59.311 04:07:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.311 04:07:12 -- nvmf/common.sh@297 -- # x722=() 00:15:59.311 04:07:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.311 04:07:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:59.311 04:07:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.311 04:07:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.311 04:07:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.311 04:07:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:59.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:59.311 04:07:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.311 04:07:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:59.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:59.311 04:07:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.311 04:07:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.311 04:07:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:59.311 04:07:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:59.311 Found net devices under 0000:af:00.0: cvl_0_0 00:15:59.311 04:07:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.311 04:07:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.311 04:07:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.311 04:07:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:59.311 Found net devices under 0000:af:00.1: cvl_0_1 00:15:59.311 04:07:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:59.311 04:07:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:59.311 04:07:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.311 04:07:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:59.311 04:07:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:59.311 04:07:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:59.311 04:07:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:59.311 04:07:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:59.311 04:07:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:59.311 04:07:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.311 04:07:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:59.311 04:07:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:59.311 04:07:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:59.311 04:07:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:59.311 04:07:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:59.311 04:07:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:59.311 04:07:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:59.311 04:07:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:59.311 04:07:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:59.311 04:07:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:59.311 04:07:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:59.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:59.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:15:59.311 00:15:59.311 --- 10.0.0.2 ping statistics --- 00:15:59.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.311 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:15:59.311 04:07:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:59.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:59.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:15:59.311 00:15:59.311 --- 10.0.0.1 ping statistics --- 00:15:59.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.311 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:15:59.311 04:07:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.311 04:07:12 -- nvmf/common.sh@411 -- # return 0 00:15:59.311 04:07:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:59.311 04:07:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.311 04:07:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:59.311 04:07:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.311 04:07:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:59.311 04:07:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:59.311 04:07:12 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:59.311 04:07:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:59.311 04:07:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:59.311 04:07:12 -- common/autotest_common.sh@10 -- # set +x 00:15:59.311 04:07:12 -- nvmf/common.sh@470 -- # nvmfpid=3815739 00:15:59.311 04:07:12 -- nvmf/common.sh@471 -- # waitforlisten 3815739 00:15:59.311 04:07:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.311 04:07:12 -- common/autotest_common.sh@817 -- # '[' -z 3815739 ']' 00:15:59.311 04:07:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.311 04:07:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:59.311 04:07:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.311 04:07:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:59.311 04:07:12 -- common/autotest_common.sh@10 -- # set +x 00:15:59.311 [2024-04-19 04:07:12.980739] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:15:59.311 [2024-04-19 04:07:12.980791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.311 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.311 [2024-04-19 04:07:13.067225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.311 [2024-04-19 04:07:13.156474] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.311 [2024-04-19 04:07:13.156519] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.311 [2024-04-19 04:07:13.156529] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.311 [2024-04-19 04:07:13.156538] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.311 [2024-04-19 04:07:13.156546] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
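The nvmf_tgt launch captured just above maps directly onto the startup notices around it: -m 0xF yields the four reactor threads reported next on cores 0-3, and -e 0xFFFF enables every tracepoint group, which is why app_setup_trace suggests 'spdk_trace -s nvmf -i 0'. A minimal sketch of the same launch outside the CI harness, assuming a stock SPDK build tree and that the cvl_0_0_ns_spdk namespace has already been set up as shown earlier:

    # Start the NVMe-oF target inside the test network namespace.
    #   -i 0      shared-memory id (matches "spdk_trace ... -i 0" in the notice above)
    #   -e 0xFFFF enable all tracepoint groups
    #   -m 0xF    run reactors on cores 0-3
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF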
00:15:59.311 [2024-04-19 04:07:13.156596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.311 [2024-04-19 04:07:13.156695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.311 [2024-04-19 04:07:13.156786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.311 [2024-04-19 04:07:13.156786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.577 04:07:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:59.577 04:07:13 -- common/autotest_common.sh@850 -- # return 0 00:15:59.577 04:07:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:59.577 04:07:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:59.577 04:07:13 -- common/autotest_common.sh@10 -- # set +x 00:15:59.577 04:07:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.577 04:07:13 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:59.836 [2024-04-19 04:07:14.174002] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.836 04:07:14 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.094 04:07:14 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:00.094 04:07:14 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.353 04:07:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:00.353 04:07:14 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.612 04:07:15 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:00.612 04:07:15 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.871 04:07:15 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:00.871 04:07:15 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:01.130 04:07:15 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.389 04:07:15 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:01.389 04:07:15 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.647 04:07:16 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:01.647 04:07:16 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:01.905 04:07:16 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:01.905 04:07:16 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:02.164 04:07:16 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:02.423 04:07:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:02.423 04:07:16 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:02.681 04:07:17 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:02.681 04:07:17 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:02.940 04:07:17 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.198 [2024-04-19 04:07:17.577795] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.198 04:07:17 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:03.457 04:07:17 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:03.715 04:07:18 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.092 04:07:19 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:05.092 04:07:19 -- common/autotest_common.sh@1184 -- # local i=0 00:16:05.092 04:07:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.092 04:07:19 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:05.092 04:07:19 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:05.092 04:07:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:06.995 04:07:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:06.995 04:07:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:06.995 04:07:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.995 04:07:21 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:06.995 04:07:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.995 04:07:21 -- common/autotest_common.sh@1194 -- # return 0 00:16:06.995 04:07:21 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:06.995 [global] 00:16:06.995 thread=1 00:16:06.995 invalidate=1 00:16:06.995 rw=write 00:16:06.995 time_based=1 00:16:06.996 runtime=1 00:16:06.996 ioengine=libaio 00:16:06.996 direct=1 00:16:06.996 bs=4096 00:16:06.996 iodepth=1 00:16:06.996 norandommap=0 00:16:06.996 numjobs=1 00:16:06.996 00:16:06.996 verify_dump=1 00:16:06.996 verify_backlog=512 00:16:06.996 verify_state_save=0 00:16:06.996 do_verify=1 00:16:06.996 verify=crc32c-intel 00:16:06.996 [job0] 00:16:06.996 filename=/dev/nvme0n1 00:16:06.996 [job1] 00:16:06.996 filename=/dev/nvme0n2 00:16:06.996 [job2] 00:16:06.996 filename=/dev/nvme0n3 00:16:06.996 [job3] 00:16:06.996 filename=/dev/nvme0n4 00:16:06.996 Could not set queue depth (nvme0n1) 00:16:06.996 Could not set queue depth (nvme0n2) 00:16:06.996 Could not set queue depth (nvme0n3) 00:16:06.996 Could not set queue depth (nvme0n4) 00:16:07.563 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.563 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.563 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.563 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:07.563 fio-3.35 
00:16:07.563 Starting 4 threads 00:16:08.940 00:16:08.940 job0: (groupid=0, jobs=1): err= 0: pid=3817525: Fri Apr 19 04:07:23 2024 00:16:08.940 read: IOPS=675, BW=2701KiB/s (2766kB/s)(2804KiB/1038msec) 00:16:08.940 slat (nsec): min=6262, max=30061, avg=7763.43, stdev=2577.27 00:16:08.940 clat (usec): min=232, max=42579, avg=1134.56, stdev=5722.14 00:16:08.940 lat (usec): min=241, max=42586, avg=1142.32, stdev=5723.93 00:16:08.940 clat percentiles (usec): 00:16:08.940 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 269], 00:16:08.940 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:16:08.940 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 420], 95.00th=[ 453], 00:16:08.940 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:16:08.940 | 99.99th=[42730] 00:16:08.940 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:16:08.941 slat (nsec): min=8954, max=97607, avg=11727.10, stdev=4424.51 00:16:08.941 clat (usec): min=146, max=987, avg=215.12, stdev=52.87 00:16:08.941 lat (usec): min=157, max=999, avg=226.85, stdev=53.97 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 180], 00:16:08.941 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:16:08.941 | 70.00th=[ 229], 80.00th=[ 249], 90.00th=[ 289], 95.00th=[ 318], 00:16:08.941 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 429], 99.95th=[ 988], 00:16:08.941 | 99.99th=[ 988] 00:16:08.941 bw ( KiB/s): min= 4087, max= 4096, per=41.55%, avg=4091.50, stdev= 6.36, samples=2 00:16:08.941 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:16:08.941 lat (usec) : 250=49.51%, 500=49.28%, 750=0.35%, 1000=0.06% 00:16:08.941 lat (msec) : 50=0.81% 00:16:08.941 cpu : usr=1.25%, sys=1.83%, ctx=1725, majf=0, minf=1 00:16:08.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 issued rwts: total=701,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.941 job1: (groupid=0, jobs=1): err= 0: pid=3817526: Fri Apr 19 04:07:23 2024 00:16:08.941 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:16:08.941 slat (nsec): min=10626, max=24420, avg=18478.81, stdev=5702.28 00:16:08.941 clat (usec): min=40815, max=41353, avg=40985.58, stdev=116.70 00:16:08.941 lat (usec): min=40838, max=41364, avg=41004.06, stdev=114.60 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:08.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:08.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:08.941 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:08.941 | 99.99th=[41157] 00:16:08.941 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:08.941 slat (usec): min=10, max=132, avg=17.44, stdev=15.08 00:16:08.941 clat (usec): min=136, max=923, avg=264.12, stdev=62.22 00:16:08.941 lat (usec): min=184, max=934, avg=281.56, stdev=62.66 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[ 176], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 231], 00:16:08.941 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 260], 00:16:08.941 | 70.00th=[ 281], 80.00th=[ 306], 
90.00th=[ 322], 95.00th=[ 338], 00:16:08.941 | 99.00th=[ 416], 99.50th=[ 603], 99.90th=[ 922], 99.95th=[ 922], 00:16:08.941 | 99.99th=[ 922] 00:16:08.941 bw ( KiB/s): min= 4096, max= 4096, per=41.60%, avg=4096.00, stdev= 0.00, samples=1 00:16:08.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:08.941 lat (usec) : 250=52.72%, 500=42.59%, 750=0.38%, 1000=0.38% 00:16:08.941 lat (msec) : 50=3.94% 00:16:08.941 cpu : usr=0.60%, sys=0.79%, ctx=534, majf=0, minf=1 00:16:08.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.941 job2: (groupid=0, jobs=1): err= 0: pid=3817527: Fri Apr 19 04:07:23 2024 00:16:08.941 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:16:08.941 slat (nsec): min=10525, max=24955, avg=21629.90, stdev=2681.55 00:16:08.941 clat (usec): min=40867, max=41227, avg=40973.15, stdev=75.48 00:16:08.941 lat (usec): min=40890, max=41248, avg=40994.78, stdev=75.31 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:08.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:08.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:08.941 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:08.941 | 99.99th=[41157] 00:16:08.941 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:16:08.941 slat (nsec): min=10735, max=68080, avg=13613.15, stdev=6796.64 00:16:08.941 clat (usec): min=194, max=421, avg=271.92, stdev=35.53 00:16:08.941 lat (usec): min=206, max=442, avg=285.53, stdev=36.77 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:16:08.941 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:16:08.941 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 334], 00:16:08.941 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 420], 99.95th=[ 420], 00:16:08.941 | 99.99th=[ 420] 00:16:08.941 bw ( KiB/s): min= 4087, max= 4087, per=41.51%, avg=4087.00, stdev= 0.00, samples=1 00:16:08.941 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:08.941 lat (usec) : 250=31.71%, 500=64.35% 00:16:08.941 lat (msec) : 50=3.94% 00:16:08.941 cpu : usr=0.10%, sys=1.19%, ctx=533, majf=0, minf=2 00:16:08.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.941 job3: (groupid=0, jobs=1): err= 0: pid=3817528: Fri Apr 19 04:07:23 2024 00:16:08.941 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:16:08.941 slat (nsec): min=11091, max=25016, avg=22979.45, stdev=2748.25 00:16:08.941 clat (usec): min=40784, max=42079, avg=41018.14, stdev=251.90 00:16:08.941 lat (usec): min=40808, max=42090, avg=41041.12, stdev=249.40 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[40633], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:16:08.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:08.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:08.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:08.941 | 99.99th=[42206] 00:16:08.941 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:16:08.941 slat (nsec): min=10992, max=51447, avg=13004.26, stdev=3023.49 00:16:08.941 clat (usec): min=210, max=1292, avg=250.54, stdev=67.65 00:16:08.941 lat (usec): min=222, max=1304, avg=263.55, stdev=67.79 00:16:08.941 clat percentiles (usec): 00:16:08.941 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:16:08.941 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:16:08.941 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 285], 00:16:08.941 | 99.00th=[ 367], 99.50th=[ 848], 99.90th=[ 1287], 99.95th=[ 1287], 00:16:08.941 | 99.99th=[ 1287] 00:16:08.941 bw ( KiB/s): min= 4087, max= 4087, per=41.51%, avg=4087.00, stdev= 0.00, samples=1 00:16:08.941 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:08.941 lat (usec) : 250=65.17%, 500=29.96%, 750=0.19%, 1000=0.37% 00:16:08.941 lat (msec) : 2=0.19%, 50=4.12% 00:16:08.941 cpu : usr=0.38%, sys=0.96%, ctx=535, majf=0, minf=1 00:16:08.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.941 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.941 00:16:08.941 Run status group 0 (all jobs): 00:16:08.941 READ: bw=2942KiB/s (3013kB/s), 83.2KiB/s-2701KiB/s (85.2kB/s-2766kB/s), io=3060KiB (3133kB), run=1008-1040msec 00:16:08.941 WRITE: bw=9846KiB/s (10.1MB/s), 1969KiB/s-3946KiB/s (2016kB/s-4041kB/s), io=10.0MiB (10.5MB), run=1008-1040msec 00:16:08.941 00:16:08.941 Disk stats (read/write): 00:16:08.941 nvme0n1: ios=744/1024, merge=0/0, ticks=614/215, in_queue=829, util=84.97% 00:16:08.941 nvme0n2: ios=39/512, merge=0/0, ticks=1563/127, in_queue=1690, util=87.53% 00:16:08.941 nvme0n3: ios=73/512, merge=0/0, ticks=751/132, in_queue=883, util=93.19% 00:16:08.941 nvme0n4: ios=73/512, merge=0/0, ticks=845/119, in_queue=964, util=94.08% 00:16:08.941 04:07:23 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:08.941 [global] 00:16:08.941 thread=1 00:16:08.941 invalidate=1 00:16:08.941 rw=randwrite 00:16:08.941 time_based=1 00:16:08.941 runtime=1 00:16:08.941 ioengine=libaio 00:16:08.941 direct=1 00:16:08.941 bs=4096 00:16:08.941 iodepth=1 00:16:08.941 norandommap=0 00:16:08.941 numjobs=1 00:16:08.941 00:16:08.941 verify_dump=1 00:16:08.941 verify_backlog=512 00:16:08.941 verify_state_save=0 00:16:08.941 do_verify=1 00:16:08.941 verify=crc32c-intel 00:16:08.941 [job0] 00:16:08.941 filename=/dev/nvme0n1 00:16:08.941 [job1] 00:16:08.941 filename=/dev/nvme0n2 00:16:08.941 [job2] 00:16:08.941 filename=/dev/nvme0n3 00:16:08.941 [job3] 00:16:08.941 filename=/dev/nvme0n4 00:16:08.941 Could not set queue depth (nvme0n1) 00:16:08.941 Could not set queue depth (nvme0n2) 00:16:08.941 Could not set queue depth (nvme0n3) 00:16:08.941 Could not set queue depth (nvme0n4) 00:16:09.200 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.200 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.200 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.200 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:09.200 fio-3.35 00:16:09.200 Starting 4 threads 00:16:10.577 00:16:10.577 job0: (groupid=0, jobs=1): err= 0: pid=3817944: Fri Apr 19 04:07:24 2024 00:16:10.577 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4128KiB/1022msec) 00:16:10.577 slat (nsec): min=6108, max=23160, avg=7299.67, stdev=1597.60 00:16:10.577 clat (usec): min=301, max=41990, avg=659.07, stdev=3577.72 00:16:10.577 lat (usec): min=308, max=42012, avg=666.37, stdev=3578.94 00:16:10.577 clat percentiles (usec): 00:16:10.577 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:16:10.577 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:16:10.577 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:16:10.577 | 99.00th=[ 474], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:16:10.577 | 99.99th=[42206] 00:16:10.577 write: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec); 0 zone resets 00:16:10.577 slat (nsec): min=8772, max=41390, avg=10323.84, stdev=1801.44 00:16:10.577 clat (usec): min=152, max=802, avg=203.60, stdev=29.88 00:16:10.577 lat (usec): min=162, max=812, avg=213.93, stdev=30.22 00:16:10.577 clat percentiles (usec): 00:16:10.577 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:16:10.577 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 202], 00:16:10.577 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 243], 00:16:10.577 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 701], 99.95th=[ 799], 00:16:10.577 | 99.99th=[ 799] 00:16:10.577 bw ( KiB/s): min= 4096, max= 8192, per=52.00%, avg=6144.00, stdev=2896.31, samples=2 00:16:10.577 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:16:10.577 lat (usec) : 250=58.33%, 500=41.20%, 750=0.12%, 1000=0.04% 00:16:10.577 lat (msec) : 50=0.31% 00:16:10.577 cpu : usr=1.08%, sys=2.55%, ctx=2570, majf=0, minf=1 00:16:10.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.577 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.577 job1: (groupid=0, jobs=1): err= 0: pid=3817945: Fri Apr 19 04:07:24 2024 00:16:10.577 read: IOPS=28, BW=115KiB/s (118kB/s)(120KiB/1040msec) 00:16:10.577 slat (nsec): min=6818, max=23377, avg=17002.43, stdev=6931.89 00:16:10.577 clat (usec): min=284, max=41994, avg=29548.51, stdev=18384.36 00:16:10.577 lat (usec): min=292, max=42018, avg=29565.51, stdev=18388.04 00:16:10.577 clat percentiles (usec): 00:16:10.577 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 404], 00:16:10.577 | 30.00th=[18220], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:10.577 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:16:10.577 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:10.577 | 99.99th=[42206] 00:16:10.577 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 
00:16:10.577 slat (usec): min=9, max=36612, avg=82.85, stdev=1617.57 00:16:10.577 clat (usec): min=151, max=442, avg=210.77, stdev=39.06 00:16:10.577 lat (usec): min=161, max=36812, avg=293.62, stdev=1617.59 00:16:10.577 clat percentiles (usec): 00:16:10.577 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 178], 00:16:10.577 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 217], 00:16:10.577 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 277], 00:16:10.577 | 99.00th=[ 314], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 445], 00:16:10.577 | 99.99th=[ 445] 00:16:10.577 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:10.577 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:10.577 lat (usec) : 250=80.44%, 500=15.31%, 750=0.18% 00:16:10.577 lat (msec) : 20=0.18%, 50=3.87% 00:16:10.577 cpu : usr=0.29%, sys=0.58%, ctx=545, majf=0, minf=1 00:16:10.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.577 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.577 job2: (groupid=0, jobs=1): err= 0: pid=3817950: Fri Apr 19 04:07:24 2024 00:16:10.577 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:16:10.577 slat (nsec): min=10098, max=23373, avg=17044.77, stdev=5600.28 00:16:10.577 clat (usec): min=40908, max=42000, avg=41130.76, stdev=341.63 00:16:10.577 lat (usec): min=40927, max=42023, avg=41147.80, stdev=343.97 00:16:10.577 clat percentiles (usec): 00:16:10.578 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:10.578 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:10.578 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:16:10.578 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:10.578 | 99.99th=[42206] 00:16:10.578 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:16:10.578 slat (nsec): min=9433, max=38676, avg=12148.42, stdev=2646.14 00:16:10.578 clat (usec): min=156, max=340, avg=202.46, stdev=19.83 00:16:10.578 lat (usec): min=167, max=377, avg=214.61, stdev=20.25 00:16:10.578 clat percentiles (usec): 00:16:10.578 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:16:10.578 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:16:10.578 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 239], 00:16:10.578 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 343], 00:16:10.578 | 99.99th=[ 343] 00:16:10.578 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:10.578 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:10.578 lat (usec) : 250=93.63%, 500=2.25% 00:16:10.578 lat (msec) : 50=4.12% 00:16:10.578 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:16:10.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.578 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.578 job3: (groupid=0, 
jobs=1): err= 0: pid=3817951: Fri Apr 19 04:07:24 2024 00:16:10.578 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:16:10.578 slat (nsec): min=9111, max=22721, avg=16327.43, stdev=5228.72 00:16:10.578 clat (usec): min=509, max=42002, avg=39360.85, stdev=8476.48 00:16:10.578 lat (usec): min=522, max=42024, avg=39377.18, stdev=8477.34 00:16:10.578 clat percentiles (usec): 00:16:10.578 | 1.00th=[ 510], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:10.578 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:10.578 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:10.578 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:10.578 | 99.99th=[42206] 00:16:10.578 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:10.578 slat (nsec): min=9156, max=40541, avg=11995.78, stdev=2110.40 00:16:10.578 clat (usec): min=179, max=866, avg=238.10, stdev=56.40 00:16:10.578 lat (usec): min=190, max=876, avg=250.10, stdev=56.21 00:16:10.578 clat percentiles (usec): 00:16:10.578 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 215], 00:16:10.578 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:16:10.578 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:16:10.578 | 99.00th=[ 396], 99.50th=[ 775], 99.90th=[ 865], 99.95th=[ 865], 00:16:10.578 | 99.99th=[ 865] 00:16:10.578 bw ( KiB/s): min= 4096, max= 4096, per=34.67%, avg=4096.00, stdev= 0.00, samples=1 00:16:10.578 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:10.578 lat (usec) : 250=76.26%, 500=18.69%, 750=0.19%, 1000=0.75% 00:16:10.578 lat (msec) : 50=4.11% 00:16:10.578 cpu : usr=0.48%, sys=0.48%, ctx=535, majf=0, minf=2 00:16:10.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.578 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.578 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.578 00:16:10.578 Run status group 0 (all jobs): 00:16:10.578 READ: bw=4258KiB/s (4360kB/s), 86.4KiB/s-4039KiB/s (88.5kB/s-4136kB/s), io=4428KiB (4534kB), run=1018-1040msec 00:16:10.578 WRITE: bw=11.5MiB/s (12.1MB/s), 1969KiB/s-6012KiB/s (2016kB/s-6156kB/s), io=12.0MiB (12.6MB), run=1018-1040msec 00:16:10.578 00:16:10.578 Disk stats (read/write): 00:16:10.578 nvme0n1: ios=1050/1536, merge=0/0, ticks=1377/306, in_queue=1683, util=90.08% 00:16:10.578 nvme0n2: ios=76/512, merge=0/0, ticks=1657/104, in_queue=1761, util=94.21% 00:16:10.578 nvme0n3: ios=52/512, merge=0/0, ticks=936/100, in_queue=1036, util=99.06% 00:16:10.578 nvme0n4: ios=75/512, merge=0/0, ticks=777/122, in_queue=899, util=95.81% 00:16:10.578 04:07:24 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:10.578 [global] 00:16:10.578 thread=1 00:16:10.578 invalidate=1 00:16:10.578 rw=write 00:16:10.578 time_based=1 00:16:10.578 runtime=1 00:16:10.578 ioengine=libaio 00:16:10.578 direct=1 00:16:10.578 bs=4096 00:16:10.578 iodepth=128 00:16:10.578 norandommap=0 00:16:10.578 numjobs=1 00:16:10.578 00:16:10.578 verify_dump=1 00:16:10.578 verify_backlog=512 00:16:10.578 verify_state_save=0 00:16:10.578 do_verify=1 00:16:10.578 verify=crc32c-intel 00:16:10.578 [job0] 00:16:10.578 filename=/dev/nvme0n1 
00:16:10.578 [job1] 00:16:10.578 filename=/dev/nvme0n2 00:16:10.578 [job2] 00:16:10.578 filename=/dev/nvme0n3 00:16:10.578 [job3] 00:16:10.578 filename=/dev/nvme0n4 00:16:10.578 Could not set queue depth (nvme0n1) 00:16:10.578 Could not set queue depth (nvme0n2) 00:16:10.578 Could not set queue depth (nvme0n3) 00:16:10.578 Could not set queue depth (nvme0n4) 00:16:10.836 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.836 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.836 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.836 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:10.837 fio-3.35 00:16:10.837 Starting 4 threads 00:16:11.788 00:16:11.788 job0: (groupid=0, jobs=1): err= 0: pid=3818385: Fri Apr 19 04:07:26 2024 00:16:11.788 read: IOPS=3558, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:16:11.788 slat (usec): min=2, max=14247, avg=129.32, stdev=975.94 00:16:11.788 clat (usec): min=1933, max=30601, avg=17052.60, stdev=3394.71 00:16:11.788 lat (usec): min=6816, max=30622, avg=17181.92, stdev=3478.34 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[10421], 5.00th=[12911], 10.00th=[14615], 20.00th=[15664], 00:16:11.788 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16319], 00:16:11.788 | 70.00th=[16712], 80.00th=[19530], 90.00th=[21103], 95.00th=[23987], 00:16:11.788 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:16:11.788 | 99.99th=[30540] 00:16:11.788 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:16:11.788 slat (usec): min=3, max=28131, avg=133.68, stdev=1165.37 00:16:11.788 clat (usec): min=1020, max=72807, avg=17578.79, stdev=8339.63 00:16:11.788 lat (usec): min=1029, max=72840, avg=17712.47, stdev=8454.70 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 6259], 5.00th=[11076], 10.00th=[12387], 20.00th=[14615], 00:16:11.788 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:16:11.788 | 70.00th=[16319], 80.00th=[16712], 90.00th=[24511], 95.00th=[37487], 00:16:11.788 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[66847], 00:16:11.788 | 99.99th=[72877] 00:16:11.788 bw ( KiB/s): min=13072, max=15600, per=23.05%, avg=14336.00, stdev=1787.57, samples=2 00:16:11.788 iops : min= 3268, max= 3900, avg=3584.00, stdev=446.89, samples=2 00:16:11.788 lat (msec) : 2=0.06%, 10=2.09%, 20=83.92%, 50=12.22%, 100=1.70% 00:16:11.788 cpu : usr=3.09%, sys=4.88%, ctx=276, majf=0, minf=1 00:16:11.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:11.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.788 issued rwts: total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.788 job1: (groupid=0, jobs=1): err= 0: pid=3818389: Fri Apr 19 04:07:26 2024 00:16:11.788 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:16:11.788 slat (nsec): min=1961, max=15640k, avg=153000.77, stdev=1132413.72 00:16:11.788 clat (usec): min=3728, max=39665, avg=18423.66, stdev=5096.96 00:16:11.788 lat (usec): min=5652, max=39689, avg=18576.66, stdev=5172.59 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 
1.00th=[ 7439], 5.00th=[12256], 10.00th=[13566], 20.00th=[15533], 00:16:11.788 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:16:11.788 | 70.00th=[20841], 80.00th=[23987], 90.00th=[25822], 95.00th=[27919], 00:16:11.788 | 99.00th=[30802], 99.50th=[31589], 99.90th=[37487], 99.95th=[39060], 00:16:11.788 | 99.99th=[39584] 00:16:11.788 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1013msec); 0 zone resets 00:16:11.788 slat (usec): min=3, max=21146, avg=107.59, stdev=552.89 00:16:11.788 clat (usec): min=3592, max=32476, avg=15020.01, stdev=3425.33 00:16:11.788 lat (usec): min=3603, max=35795, avg=15127.60, stdev=3467.12 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 4883], 5.00th=[ 7701], 10.00th=[ 9765], 20.00th=[12387], 00:16:11.788 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16319], 60.00th=[16319], 00:16:11.788 | 70.00th=[16450], 80.00th=[16581], 90.00th=[16909], 95.00th=[18220], 00:16:11.788 | 99.00th=[25035], 99.50th=[27132], 99.90th=[29754], 99.95th=[32375], 00:16:11.788 | 99.99th=[32375] 00:16:11.788 bw ( KiB/s): min=14600, max=16112, per=24.69%, avg=15356.00, stdev=1069.15, samples=2 00:16:11.788 iops : min= 3650, max= 4028, avg=3839.00, stdev=267.29, samples=2 00:16:11.788 lat (msec) : 4=0.17%, 10=6.41%, 20=76.91%, 50=16.50% 00:16:11.788 cpu : usr=3.95%, sys=4.05%, ctx=500, majf=0, minf=1 00:16:11.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:11.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.788 issued rwts: total=3584,3966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.788 job2: (groupid=0, jobs=1): err= 0: pid=3818404: Fri Apr 19 04:07:26 2024 00:16:11.788 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:16:11.788 slat (nsec): min=1942, max=4535.8k, avg=105856.21, stdev=556349.51 00:16:11.788 clat (usec): min=6815, max=19039, avg=13788.75, stdev=2135.11 00:16:11.788 lat (usec): min=6819, max=19655, avg=13894.60, stdev=2145.01 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 7308], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[12256], 00:16:11.788 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:16:11.788 | 70.00th=[14746], 80.00th=[15401], 90.00th=[16188], 95.00th=[16581], 00:16:11.788 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:16:11.788 | 99.99th=[19006] 00:16:11.788 write: IOPS=4814, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1003msec); 0 zone resets 00:16:11.788 slat (usec): min=3, max=4614, avg=100.17, stdev=526.29 00:16:11.788 clat (usec): min=262, max=18956, avg=13100.93, stdev=2885.75 00:16:11.788 lat (usec): min=4397, max=19237, avg=13201.09, stdev=2928.96 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 4883], 5.00th=[ 5473], 10.00th=[ 7832], 20.00th=[12256], 00:16:11.788 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:16:11.788 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14746], 95.00th=[16057], 00:16:11.788 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:16:11.788 | 99.99th=[19006] 00:16:11.788 bw ( KiB/s): min=17128, max=20480, per=30.23%, avg=18804.00, stdev=2370.22, samples=2 00:16:11.788 iops : min= 4282, max= 5120, avg=4701.00, stdev=592.56, samples=2 00:16:11.788 lat (usec) : 500=0.01% 00:16:11.788 lat (msec) : 10=10.28%, 20=89.71% 00:16:11.788 cpu : usr=4.39%, 
sys=5.79%, ctx=442, majf=0, minf=1 00:16:11.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:11.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.788 issued rwts: total=4608,4829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.788 job3: (groupid=0, jobs=1): err= 0: pid=3818410: Fri Apr 19 04:07:26 2024 00:16:11.788 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:16:11.788 slat (nsec): min=1915, max=17567k, avg=166263.13, stdev=1248649.50 00:16:11.788 clat (usec): min=6284, max=36768, avg=20115.20, stdev=4936.86 00:16:11.788 lat (usec): min=6289, max=36787, avg=20281.46, stdev=5024.86 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 7373], 5.00th=[14091], 10.00th=[17171], 20.00th=[17695], 00:16:11.788 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:16:11.788 | 70.00th=[19530], 80.00th=[21890], 90.00th=[28181], 95.00th=[31065], 00:16:11.788 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:16:11.788 | 99.99th=[36963] 00:16:11.788 write: IOPS=3343, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1014msec); 0 zone resets 00:16:11.788 slat (usec): min=3, max=45819, avg=137.97, stdev=1015.37 00:16:11.788 clat (usec): min=1550, max=47376, avg=17162.27, stdev=4013.40 00:16:11.788 lat (usec): min=1562, max=68108, avg=17300.24, stdev=4145.27 00:16:11.788 clat percentiles (usec): 00:16:11.788 | 1.00th=[ 5342], 5.00th=[ 8586], 10.00th=[11863], 20.00th=[14746], 00:16:11.788 | 30.00th=[15795], 40.00th=[17957], 50.00th=[18744], 60.00th=[19006], 00:16:11.788 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20055], 00:16:11.788 | 99.00th=[29754], 99.50th=[32375], 99.90th=[35390], 99.95th=[35914], 00:16:11.788 | 99.99th=[47449] 00:16:11.788 bw ( KiB/s): min=12288, max=13816, per=20.98%, avg=13052.00, stdev=1080.46, samples=2 00:16:11.788 iops : min= 3072, max= 3454, avg=3263.00, stdev=270.11, samples=2 00:16:11.788 lat (msec) : 2=0.09%, 10=4.19%, 20=80.45%, 50=15.26% 00:16:11.788 cpu : usr=2.47%, sys=4.24%, ctx=421, majf=0, minf=1 00:16:11.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:11.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.788 issued rwts: total=3072,3390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.788 00:16:11.788 Run status group 0 (all jobs): 00:16:11.788 READ: bw=57.2MiB/s (59.9MB/s), 11.8MiB/s-17.9MiB/s (12.4MB/s-18.8MB/s), io=58.0MiB (60.8MB), run=1003-1014msec 00:16:11.788 WRITE: bw=60.7MiB/s (63.7MB/s), 13.1MiB/s-18.8MiB/s (13.7MB/s-19.7MB/s), io=61.6MiB (64.6MB), run=1003-1014msec 00:16:11.788 00:16:11.788 Disk stats (read/write): 00:16:11.788 nvme0n1: ios=2926/3072, merge=0/0, ticks=48812/47503, in_queue=96315, util=91.58% 00:16:11.788 nvme0n2: ios=3091/3231, merge=0/0, ticks=56208/46690, in_queue=102898, util=95.83% 00:16:11.788 nvme0n3: ios=4022/4096, merge=0/0, ticks=18815/16502, in_queue=35317, util=96.87% 00:16:11.788 nvme0n4: ios=2582/2791, merge=0/0, ticks=50929/46216, in_queue=97145, util=100.00% 00:16:12.047 04:07:26 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
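The fio-wrapper invocation echoed just above starts the fourth and final data-path workload of this test. Comparing each invocation with the [global] section it prints, the flags appear to map as -i 4096 -> bs=4096, -d 128 -> iodepth=128, -t randwrite -> rw=randwrite, -r 1 -> runtime=1 (time_based), and -v -> crc32c-intel verification; this mapping is inferred from the dumps in this log, not from the wrapper source. A single-device sketch of an equivalent bare fio command line:

    # One job of the wrapped workload, expressed as plain fio flags
    # (device path taken from the job dumps above; run as root).
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --bs=4096 --iodepth=128 --rw=randwrite \
        --time_based --runtime=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel \
        --verify_backlog=512 --verify_dump=1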
00:16:12.047 [global] 00:16:12.047 thread=1 00:16:12.047 invalidate=1 00:16:12.047 rw=randwrite 00:16:12.047 time_based=1 00:16:12.047 runtime=1 00:16:12.047 ioengine=libaio 00:16:12.047 direct=1 00:16:12.047 bs=4096 00:16:12.047 iodepth=128 00:16:12.047 norandommap=0 00:16:12.047 numjobs=1 00:16:12.047 00:16:12.047 verify_dump=1 00:16:12.047 verify_backlog=512 00:16:12.047 verify_state_save=0 00:16:12.047 do_verify=1 00:16:12.047 verify=crc32c-intel 00:16:12.047 [job0] 00:16:12.047 filename=/dev/nvme0n1 00:16:12.047 [job1] 00:16:12.047 filename=/dev/nvme0n2 00:16:12.047 [job2] 00:16:12.047 filename=/dev/nvme0n3 00:16:12.047 [job3] 00:16:12.047 filename=/dev/nvme0n4 00:16:12.047 Could not set queue depth (nvme0n1) 00:16:12.047 Could not set queue depth (nvme0n2) 00:16:12.047 Could not set queue depth (nvme0n3) 00:16:12.047 Could not set queue depth (nvme0n4) 00:16:12.306 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.306 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.306 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.306 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:12.306 fio-3.35 00:16:12.306 Starting 4 threads 00:16:13.682 00:16:13.682 job0: (groupid=0, jobs=1): err= 0: pid=3818850: Fri Apr 19 04:07:27 2024 00:16:13.683 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:16:13.683 slat (nsec): min=1922, max=28079k, avg=132396.55, stdev=1227606.01 00:16:13.683 clat (usec): min=4007, max=72433, avg=16418.74, stdev=10732.43 00:16:13.683 lat (usec): min=4012, max=72458, avg=16551.13, stdev=10863.39 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10159], 00:16:13.683 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[13173], 00:16:13.683 | 70.00th=[16319], 80.00th=[17433], 90.00th=[38011], 95.00th=[46924], 00:16:13.683 | 99.00th=[55313], 99.50th=[55313], 99.90th=[63177], 99.95th=[66323], 00:16:13.683 | 99.99th=[72877] 00:16:13.683 write: IOPS=4227, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1010msec); 0 zone resets 00:16:13.683 slat (usec): min=3, max=15390, avg=91.27, stdev=649.75 00:16:13.683 clat (usec): min=1547, max=65044, avg=14122.91, stdev=8195.19 00:16:13.683 lat (usec): min=1559, max=65055, avg=14214.18, stdev=8252.15 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 3785], 5.00th=[ 5997], 10.00th=[ 7242], 20.00th=[ 9241], 00:16:13.683 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11994], 60.00th=[12518], 00:16:13.683 | 70.00th=[12780], 80.00th=[18744], 90.00th=[24511], 95.00th=[31851], 00:16:13.683 | 99.00th=[43779], 99.50th=[46400], 99.90th=[48497], 99.95th=[50070], 00:16:13.683 | 99.99th=[65274] 00:16:13.683 bw ( KiB/s): min=13652, max=19464, per=26.97%, avg=16558.00, stdev=4109.70, samples=2 00:16:13.683 iops : min= 3413, max= 4866, avg=4139.50, stdev=1027.43, samples=2 00:16:13.683 lat (msec) : 2=0.02%, 4=0.65%, 10=21.40%, 20=62.78%, 50=14.32% 00:16:13.683 lat (msec) : 100=0.84% 00:16:13.683 cpu : usr=3.37%, sys=5.65%, ctx=416, majf=0, minf=1 00:16:13.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:13.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.683 
issued rwts: total=4096,4270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.683 job1: (groupid=0, jobs=1): err= 0: pid=3818862: Fri Apr 19 04:07:27 2024 00:16:13.683 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:16:13.683 slat (nsec): min=1566, max=14325k, avg=119017.84, stdev=876959.90 00:16:13.683 clat (usec): min=4587, max=31683, avg=15745.16, stdev=4438.76 00:16:13.683 lat (usec): min=4620, max=31686, avg=15864.18, stdev=4507.84 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 7373], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11338], 00:16:13.683 | 30.00th=[13435], 40.00th=[14484], 50.00th=[16188], 60.00th=[16450], 00:16:13.683 | 70.00th=[17957], 80.00th=[18482], 90.00th=[21103], 95.00th=[24511], 00:16:13.683 | 99.00th=[28705], 99.50th=[29492], 99.90th=[31065], 99.95th=[31065], 00:16:13.683 | 99.99th=[31589] 00:16:13.683 write: IOPS=4429, BW=17.3MiB/s (18.1MB/s)(17.5MiB/1011msec); 0 zone resets 00:16:13.683 slat (usec): min=2, max=12804, avg=94.43, stdev=647.78 00:16:13.683 clat (usec): min=270, max=49028, avg=14255.33, stdev=5759.67 00:16:13.683 lat (usec): min=498, max=49033, avg=14349.76, stdev=5796.10 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 2474], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 8979], 00:16:13.683 | 30.00th=[10945], 40.00th=[13698], 50.00th=[15664], 60.00th=[16581], 00:16:13.683 | 70.00th=[17171], 80.00th=[17695], 90.00th=[20317], 95.00th=[21627], 00:16:13.683 | 99.00th=[30016], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:16:13.683 | 99.99th=[49021] 00:16:13.683 bw ( KiB/s): min=16135, max=18632, per=28.31%, avg=17383.50, stdev=1765.65, samples=2 00:16:13.683 iops : min= 4033, max= 4658, avg=4345.50, stdev=441.94, samples=2 00:16:13.683 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.07% 00:16:13.683 lat (msec) : 2=0.17%, 4=1.07%, 10=15.22%, 20=70.98%, 50=12.41% 00:16:13.683 cpu : usr=2.57%, sys=4.85%, ctx=353, majf=0, minf=1 00:16:13.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:13.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.683 issued rwts: total=4096,4478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.683 job2: (groupid=0, jobs=1): err= 0: pid=3818885: Fri Apr 19 04:07:27 2024 00:16:13.683 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec) 00:16:13.683 slat (usec): min=2, max=18222, avg=149.08, stdev=1165.18 00:16:13.683 clat (usec): min=5196, max=36698, avg=18253.46, stdev=4201.62 00:16:13.683 lat (usec): min=5202, max=36724, avg=18402.54, stdev=4295.54 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 7111], 5.00th=[15008], 10.00th=[15664], 20.00th=[15926], 00:16:13.683 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16712], 60.00th=[18482], 00:16:13.683 | 70.00th=[19006], 80.00th=[19530], 90.00th=[24511], 95.00th=[27132], 00:16:13.683 | 99.00th=[31065], 99.50th=[33162], 99.90th=[35390], 99.95th=[36439], 00:16:13.683 | 99.99th=[36439] 00:16:13.683 write: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(15.0MiB/1020msec); 0 zone resets 00:16:13.683 slat (usec): min=3, max=15738, avg=117.07, stdev=720.39 00:16:13.683 clat (usec): min=3402, max=50419, avg=16609.93, stdev=6328.77 00:16:13.683 lat (usec): min=3413, max=50431, avg=16726.99, stdev=6370.37 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[ 
4146], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[13435], 00:16:13.683 | 30.00th=[15533], 40.00th=[16319], 50.00th=[16450], 60.00th=[16712], 00:16:13.683 | 70.00th=[17171], 80.00th=[19006], 90.00th=[19268], 95.00th=[26084], 00:16:13.683 | 99.00th=[46400], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:16:13.683 | 99.99th=[50594] 00:16:13.683 bw ( KiB/s): min=14323, max=15328, per=24.14%, avg=14825.50, stdev=710.64, samples=2 00:16:13.683 iops : min= 3580, max= 3832, avg=3706.00, stdev=178.19, samples=2 00:16:13.683 lat (msec) : 4=0.24%, 10=5.71%, 20=81.38%, 50=12.60%, 100=0.07% 00:16:13.683 cpu : usr=3.73%, sys=4.22%, ctx=430, majf=0, minf=1 00:16:13.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:13.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.683 issued rwts: total=3584,3838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.683 job3: (groupid=0, jobs=1): err= 0: pid=3818893: Fri Apr 19 04:07:27 2024 00:16:13.683 read: IOPS=2591, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1009msec) 00:16:13.683 slat (usec): min=2, max=9176, avg=140.32, stdev=793.71 00:16:13.683 clat (usec): min=7177, max=33169, avg=17294.20, stdev=2992.57 00:16:13.683 lat (usec): min=8142, max=33174, avg=17434.52, stdev=3055.92 00:16:13.683 clat percentiles (usec): 00:16:13.683 | 1.00th=[10683], 5.00th=[12256], 10.00th=[13829], 20.00th=[15795], 00:16:13.683 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16581], 60.00th=[17695], 00:16:13.683 | 70.00th=[18482], 80.00th=[19530], 90.00th=[20841], 95.00th=[21890], 00:16:13.683 | 99.00th=[25560], 99.50th=[27919], 99.90th=[33162], 99.95th=[33162], 00:16:13.683 | 99.99th=[33162] 00:16:13.683 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:16:13.683 slat (usec): min=3, max=14536, avg=198.76, stdev=950.75 00:16:13.683 clat (msec): min=9, max=106, avg=26.64, stdev=19.77 00:16:13.683 lat (msec): min=9, max=106, avg=26.84, stdev=19.90 00:16:13.683 clat percentiles (msec): 00:16:13.683 | 1.00th=[ 11], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:16:13.683 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 20], 00:16:13.683 | 70.00th=[ 20], 80.00th=[ 31], 90.00th=[ 65], 95.00th=[ 74], 00:16:13.683 | 99.00th=[ 99], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 107], 00:16:13.683 | 99.99th=[ 107] 00:16:13.683 bw ( KiB/s): min=11688, max=12288, per=19.52%, avg=11988.00, stdev=424.26, samples=2 00:16:13.683 iops : min= 2922, max= 3072, avg=2997.00, stdev=106.07, samples=2 00:16:13.683 lat (msec) : 10=0.49%, 20=75.56%, 50=16.12%, 100=7.42%, 250=0.40% 00:16:13.683 cpu : usr=2.98%, sys=3.37%, ctx=414, majf=0, minf=1 00:16:13.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:13.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.683 issued rwts: total=2615,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.683 00:16:13.683 Run status group 0 (all jobs): 00:16:13.683 READ: bw=55.1MiB/s (57.8MB/s), 10.1MiB/s-15.8MiB/s (10.6MB/s-16.6MB/s), io=56.2MiB (58.9MB), run=1009-1020msec 00:16:13.683 WRITE: bw=60.0MiB/s (62.9MB/s), 11.9MiB/s-17.3MiB/s (12.5MB/s-18.1MB/s), io=61.2MiB (64.1MB), run=1009-1020msec 00:16:13.683 00:16:13.683 
Disk stats (read/write): 00:16:13.683 nvme0n1: ios=3047/3072, merge=0/0, ticks=43534/34494, in_queue=78028, util=91.28% 00:16:13.683 nvme0n2: ios=3634/3743, merge=0/0, ticks=52260/49063, in_queue=101323, util=93.47% 00:16:13.683 nvme0n3: ios=2957/3072, merge=0/0, ticks=54090/48265, in_queue=102355, util=97.56% 00:16:13.683 nvme0n4: ios=2581/2655, merge=0/0, ticks=23223/28018, in_queue=51241, util=99.89% 00:16:13.683 04:07:27 -- target/fio.sh@55 -- # sync 00:16:13.683 04:07:27 -- target/fio.sh@59 -- # fio_pid=3819058 00:16:13.683 04:07:27 -- target/fio.sh@61 -- # sleep 3 00:16:13.683 04:07:27 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:13.683 [global] 00:16:13.683 thread=1 00:16:13.683 invalidate=1 00:16:13.683 rw=read 00:16:13.683 time_based=1 00:16:13.683 runtime=10 00:16:13.683 ioengine=libaio 00:16:13.683 direct=1 00:16:13.683 bs=4096 00:16:13.683 iodepth=1 00:16:13.683 norandommap=1 00:16:13.683 numjobs=1 00:16:13.683 00:16:13.683 [job0] 00:16:13.683 filename=/dev/nvme0n1 00:16:13.683 [job1] 00:16:13.683 filename=/dev/nvme0n2 00:16:13.683 [job2] 00:16:13.683 filename=/dev/nvme0n3 00:16:13.683 [job3] 00:16:13.683 filename=/dev/nvme0n4 00:16:13.683 Could not set queue depth (nvme0n1) 00:16:13.683 Could not set queue depth (nvme0n2) 00:16:13.683 Could not set queue depth (nvme0n3) 00:16:13.683 Could not set queue depth (nvme0n4) 00:16:13.942 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.942 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.942 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.942 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.942 fio-3.35 00:16:13.942 Starting 4 threads 00:16:16.475 04:07:30 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:16.734 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14950400, buflen=4096 00:16:16.734 fio: pid=3819377, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:16.734 04:07:31 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:16.993 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=33423360, buflen=4096 00:16:16.993 fio: pid=3819369, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:16.993 04:07:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:16.993 04:07:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:17.251 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17788928, buflen=4096 00:16:17.251 fio: pid=3819342, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:17.251 04:07:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.251 04:07:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:17.510 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=15507456, buflen=4096 00:16:17.510 fio: pid=3819347, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 
00:16:17.510 04:07:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.510 04:07:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:17.510 00:16:17.510 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3819342: Fri Apr 19 04:07:32 2024 00:16:17.510 read: IOPS=1363, BW=5454KiB/s (5585kB/s)(17.0MiB/3185msec) 00:16:17.510 slat (usec): min=6, max=11785, avg=10.09, stdev=178.71 00:16:17.510 clat (usec): min=268, max=49186, avg=717.04, stdev=3913.81 00:16:17.510 lat (usec): min=275, max=49208, avg=727.11, stdev=3918.96 00:16:17.510 clat percentiles (usec): 00:16:17.510 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:16:17.510 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:16:17.510 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 396], 95.00th=[ 420], 00:16:17.510 | 99.00th=[ 717], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:16:17.510 | 99.99th=[49021] 00:16:17.510 bw ( KiB/s): min= 96, max=12144, per=24.56%, avg=5690.17, stdev=4573.82, samples=6 00:16:17.510 iops : min= 24, max= 3036, avg=1422.50, stdev=1143.48, samples=6 00:16:17.510 lat (usec) : 500=98.57%, 750=0.44% 00:16:17.510 lat (msec) : 2=0.02%, 4=0.02%, 50=0.92% 00:16:17.510 cpu : usr=0.50%, sys=1.13%, ctx=4346, majf=0, minf=1 00:16:17.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 issued rwts: total=4344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.510 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3819347: Fri Apr 19 04:07:32 2024 00:16:17.510 read: IOPS=1100, BW=4400KiB/s (4505kB/s)(14.8MiB/3442msec) 00:16:17.510 slat (usec): min=6, max=6527, avg=10.02, stdev=105.96 00:16:17.510 clat (usec): min=305, max=41998, avg=896.84, stdev=4545.60 00:16:17.510 lat (usec): min=313, max=42022, avg=905.14, stdev=4547.51 00:16:17.510 clat percentiles (usec): 00:16:17.510 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:16:17.510 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:16:17.510 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 449], 95.00th=[ 494], 00:16:17.510 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:16:17.510 | 99.99th=[42206] 00:16:17.510 bw ( KiB/s): min= 96, max=10656, per=21.72%, avg=5033.33, stdev=5436.15, samples=6 00:16:17.510 iops : min= 24, max= 2664, avg=1258.33, stdev=1359.04, samples=6 00:16:17.510 lat (usec) : 500=95.51%, 750=3.20% 00:16:17.510 lat (msec) : 50=1.27% 00:16:17.510 cpu : usr=0.90%, sys=1.77%, ctx=3789, majf=0, minf=1 00:16:17.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 issued rwts: total=3787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.510 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3819369: Fri Apr 19 04:07:32 2024 00:16:17.510 read: IOPS=2794, 
BW=10.9MiB/s (11.4MB/s)(31.9MiB/2920msec) 00:16:17.510 slat (usec): min=7, max=8448, avg=10.14, stdev=119.87 00:16:17.510 clat (usec): min=227, max=2265, avg=343.15, stdev=64.12 00:16:17.510 lat (usec): min=235, max=8908, avg=353.29, stdev=138.07 00:16:17.510 clat percentiles (usec): 00:16:17.510 | 1.00th=[ 245], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 285], 00:16:17.510 | 30.00th=[ 306], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 363], 00:16:17.510 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 420], 00:16:17.510 | 99.00th=[ 515], 99.50th=[ 537], 99.90th=[ 578], 99.95th=[ 603], 00:16:17.510 | 99.99th=[ 2278] 00:16:17.510 bw ( KiB/s): min= 9576, max=13696, per=47.70%, avg=11052.80, stdev=1576.74, samples=5 00:16:17.510 iops : min= 2394, max= 3424, avg=2763.20, stdev=394.18, samples=5 00:16:17.510 lat (usec) : 250=2.05%, 500=96.56%, 750=1.35% 00:16:17.510 lat (msec) : 2=0.01%, 4=0.02% 00:16:17.510 cpu : usr=1.40%, sys=4.80%, ctx=8163, majf=0, minf=1 00:16:17.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 issued rwts: total=8161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.510 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3819377: Fri Apr 19 04:07:32 2024 00:16:17.510 read: IOPS=1363, BW=5454KiB/s (5585kB/s)(14.3MiB/2677msec) 00:16:17.510 slat (nsec): min=7243, max=38089, avg=8403.12, stdev=1800.99 00:16:17.510 clat (usec): min=308, max=41975, avg=717.09, stdev=3608.25 00:16:17.510 lat (usec): min=316, max=41999, avg=725.48, stdev=3609.64 00:16:17.510 clat percentiles (usec): 00:16:17.510 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:16:17.510 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:16:17.510 | 70.00th=[ 404], 80.00th=[ 416], 90.00th=[ 445], 95.00th=[ 482], 00:16:17.510 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:16:17.510 | 99.99th=[42206] 00:16:17.510 bw ( KiB/s): min= 96, max=10040, per=25.17%, avg=5833.60, stdev=5253.01, samples=5 00:16:17.510 iops : min= 24, max= 2510, avg=1458.40, stdev=1313.25, samples=5 00:16:17.510 lat (usec) : 500=97.26%, 750=1.92% 00:16:17.510 lat (msec) : 50=0.79% 00:16:17.510 cpu : usr=0.56%, sys=2.54%, ctx=3651, majf=0, minf=2 00:16:17.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.510 issued rwts: total=3651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.510 00:16:17.510 Run status group 0 (all jobs): 00:16:17.510 READ: bw=22.6MiB/s (23.7MB/s), 4400KiB/s-10.9MiB/s (4505kB/s-11.4MB/s), io=77.9MiB (81.7MB), run=2677-3442msec 00:16:17.510 00:16:17.510 Disk stats (read/write): 00:16:17.510 nvme0n1: ios=4341/0, merge=0/0, ticks=3010/0, in_queue=3010, util=94.67% 00:16:17.510 nvme0n2: ios=3783/0, merge=0/0, ticks=3237/0, in_queue=3237, util=95.86% 00:16:17.511 nvme0n3: ios=7995/0, merge=0/0, ticks=2636/0, in_queue=2636, util=96.36% 00:16:17.511 nvme0n4: ios=3648/0, merge=0/0, ticks=2493/0, in_queue=2493, util=96.42% 00:16:17.768 04:07:32 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:17.768 04:07:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:18.026 04:07:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.026 04:07:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:18.297 04:07:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.297 04:07:32 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:18.583 04:07:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:18.583 04:07:33 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:18.840 04:07:33 -- target/fio.sh@69 -- # fio_status=0 00:16:18.840 04:07:33 -- target/fio.sh@70 -- # wait 3819058 00:16:18.840 04:07:33 -- target/fio.sh@70 -- # fio_status=4 00:16:18.840 04:07:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.099 04:07:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.099 04:07:33 -- common/autotest_common.sh@1205 -- # local i=0 00:16:19.099 04:07:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.099 04:07:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:19.099 04:07:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:19.099 04:07:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.099 04:07:33 -- common/autotest_common.sh@1217 -- # return 0 00:16:19.099 04:07:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:19.099 04:07:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:19.099 nvmf hotplug test: fio failed as expected 00:16:19.099 04:07:33 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.357 04:07:33 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:19.357 04:07:33 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:19.357 04:07:33 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:19.357 04:07:33 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:19.357 04:07:33 -- target/fio.sh@91 -- # nvmftestfini 00:16:19.357 04:07:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:19.357 04:07:33 -- nvmf/common.sh@117 -- # sync 00:16:19.357 04:07:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.357 04:07:33 -- nvmf/common.sh@120 -- # set +e 00:16:19.357 04:07:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.357 04:07:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.357 rmmod nvme_tcp 00:16:19.357 rmmod nvme_fabrics 00:16:19.357 rmmod nvme_keyring 00:16:19.357 04:07:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.357 04:07:33 -- nvmf/common.sh@124 -- # set -e 00:16:19.357 04:07:33 -- nvmf/common.sh@125 -- # return 0 00:16:19.357 04:07:33 -- nvmf/common.sh@478 -- # '[' -n 3815739 ']' 00:16:19.357 04:07:33 -- nvmf/common.sh@479 -- # killprocess 3815739 00:16:19.357 04:07:33 -- common/autotest_common.sh@936 -- # '[' -z 3815739 ']' 
00:16:19.357 04:07:33 -- common/autotest_common.sh@940 -- # kill -0 3815739 00:16:19.357 04:07:33 -- common/autotest_common.sh@941 -- # uname 00:16:19.357 04:07:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.357 04:07:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3815739 00:16:19.357 04:07:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:19.357 04:07:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:19.357 04:07:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3815739' 00:16:19.357 killing process with pid 3815739 00:16:19.357 04:07:33 -- common/autotest_common.sh@955 -- # kill 3815739 00:16:19.357 04:07:33 -- common/autotest_common.sh@960 -- # wait 3815739 00:16:19.615 04:07:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:19.615 04:07:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:19.615 04:07:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:19.615 04:07:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.615 04:07:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.615 04:07:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.615 04:07:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.616 04:07:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.151 04:07:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.151 00:16:22.151 real 0m29.011s 00:16:22.151 user 2m24.845s 00:16:22.151 sys 0m8.432s 00:16:22.151 04:07:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.151 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.151 ************************************ 00:16:22.151 END TEST nvmf_fio_target 00:16:22.151 ************************************ 00:16:22.151 04:07:36 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.151 04:07:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:22.151 04:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.151 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.151 ************************************ 00:16:22.151 START TEST nvmf_bdevio 00:16:22.151 ************************************ 00:16:22.151 04:07:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.151 * Looking for test storage... 
00:16:22.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.151 04:07:36 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.151 04:07:36 -- nvmf/common.sh@7 -- # uname -s 00:16:22.151 04:07:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.151 04:07:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.151 04:07:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.151 04:07:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.151 04:07:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.151 04:07:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.151 04:07:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.151 04:07:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.151 04:07:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.151 04:07:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.151 04:07:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:22.151 04:07:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:22.151 04:07:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.151 04:07:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.151 04:07:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.151 04:07:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.151 04:07:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.151 04:07:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.151 04:07:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.151 04:07:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.151 04:07:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.151 04:07:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.151 04:07:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.151 04:07:36 -- paths/export.sh@5 -- # export PATH 00:16:22.151 04:07:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.151 04:07:36 -- nvmf/common.sh@47 -- # : 0 00:16:22.151 04:07:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.151 04:07:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.151 04:07:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.151 04:07:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.151 04:07:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.151 04:07:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.151 04:07:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.151 04:07:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.151 04:07:36 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.151 04:07:36 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.151 04:07:36 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:22.151 04:07:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:22.151 04:07:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.151 04:07:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:22.151 04:07:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:22.151 04:07:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:22.151 04:07:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.151 04:07:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.151 04:07:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.151 04:07:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:22.151 04:07:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:22.151 04:07:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.151 04:07:36 -- common/autotest_common.sh@10 -- # set +x 00:16:27.425 04:07:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.425 04:07:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.425 04:07:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.425 04:07:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.425 04:07:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.425 04:07:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.425 04:07:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.425 04:07:41 -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.425 04:07:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.425 04:07:41 -- nvmf/common.sh@296 
-- # e810=() 00:16:27.425 04:07:41 -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.425 04:07:41 -- nvmf/common.sh@297 -- # x722=() 00:16:27.425 04:07:41 -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.425 04:07:41 -- nvmf/common.sh@298 -- # mlx=() 00:16:27.425 04:07:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.425 04:07:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.425 04:07:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.425 04:07:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.425 04:07:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.425 04:07:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.425 04:07:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.425 04:07:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.425 04:07:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.425 04:07:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:27.425 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:27.425 04:07:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.425 04:07:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.425 04:07:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.426 04:07:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:27.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:27.426 04:07:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.426 04:07:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.426 04:07:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.426 04:07:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.426 04:07:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.426 04:07:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:27.426 Found 
net devices under 0000:af:00.0: cvl_0_0 00:16:27.426 04:07:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.426 04:07:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.426 04:07:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.426 04:07:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:27.426 04:07:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.426 04:07:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:27.426 Found net devices under 0000:af:00.1: cvl_0_1 00:16:27.426 04:07:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.426 04:07:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:27.426 04:07:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:27.426 04:07:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:27.426 04:07:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:27.426 04:07:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.426 04:07:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.426 04:07:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.426 04:07:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.426 04:07:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.426 04:07:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.426 04:07:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.426 04:07:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.426 04:07:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.426 04:07:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.426 04:07:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.426 04:07:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.426 04:07:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.684 04:07:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.684 04:07:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.684 04:07:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.684 04:07:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.684 04:07:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.684 04:07:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.684 04:07:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:16:27.684 00:16:27.685 --- 10.0.0.2 ping statistics --- 00:16:27.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.685 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:16:27.685 04:07:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:16:27.685 00:16:27.685 --- 10.0.0.1 ping statistics --- 00:16:27.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.685 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:16:27.685 04:07:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.685 04:07:42 -- nvmf/common.sh@411 -- # return 0 00:16:27.685 04:07:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:27.685 04:07:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.685 04:07:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:27.685 04:07:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:27.685 04:07:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.685 04:07:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:27.685 04:07:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:27.685 04:07:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:27.685 04:07:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:27.685 04:07:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:27.685 04:07:42 -- common/autotest_common.sh@10 -- # set +x 00:16:27.685 04:07:42 -- nvmf/common.sh@470 -- # nvmfpid=3824019 00:16:27.685 04:07:42 -- nvmf/common.sh@471 -- # waitforlisten 3824019 00:16:27.685 04:07:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:27.685 04:07:42 -- common/autotest_common.sh@817 -- # '[' -z 3824019 ']' 00:16:27.685 04:07:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.685 04:07:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.685 04:07:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.685 04:07:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.685 04:07:42 -- common/autotest_common.sh@10 -- # set +x 00:16:27.685 [2024-04-19 04:07:42.189084] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:16:27.685 [2024-04-19 04:07:42.189140] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.943 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.943 [2024-04-19 04:07:42.274541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.943 [2024-04-19 04:07:42.363022] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.943 [2024-04-19 04:07:42.363065] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.943 [2024-04-19 04:07:42.363076] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.943 [2024-04-19 04:07:42.363085] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.943 [2024-04-19 04:07:42.363093] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.943 [2024-04-19 04:07:42.363214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.943 [2024-04-19 04:07:42.363324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:27.943 [2024-04-19 04:07:42.363437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.943 [2024-04-19 04:07:42.363436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:28.875 04:07:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.875 04:07:43 -- common/autotest_common.sh@850 -- # return 0 00:16:28.875 04:07:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:28.875 04:07:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 04:07:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.875 04:07:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.875 04:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 [2024-04-19 04:07:43.178147] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.875 04:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.875 04:07:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.875 04:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 Malloc0 00:16:28.875 04:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.875 04:07:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:28.875 04:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 04:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.875 04:07:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.875 04:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 04:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.875 04:07:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.875 04:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.875 04:07:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.875 [2024-04-19 04:07:43.237856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.875 04:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.875 04:07:43 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:28.875 04:07:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:28.875 04:07:43 -- nvmf/common.sh@521 -- # config=() 00:16:28.875 04:07:43 -- nvmf/common.sh@521 -- # local subsystem config 00:16:28.875 04:07:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:28.875 04:07:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:28.875 { 00:16:28.875 "params": { 00:16:28.875 "name": "Nvme$subsystem", 00:16:28.875 "trtype": "$TEST_TRANSPORT", 00:16:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.875 "adrfam": "ipv4", 00:16:28.875 "trsvcid": 
"$NVMF_PORT", 00:16:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.875 "hdgst": ${hdgst:-false}, 00:16:28.875 "ddgst": ${ddgst:-false} 00:16:28.875 }, 00:16:28.875 "method": "bdev_nvme_attach_controller" 00:16:28.875 } 00:16:28.875 EOF 00:16:28.875 )") 00:16:28.875 04:07:43 -- nvmf/common.sh@543 -- # cat 00:16:28.875 04:07:43 -- nvmf/common.sh@545 -- # jq . 00:16:28.875 04:07:43 -- nvmf/common.sh@546 -- # IFS=, 00:16:28.875 04:07:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:28.875 "params": { 00:16:28.875 "name": "Nvme1", 00:16:28.875 "trtype": "tcp", 00:16:28.875 "traddr": "10.0.0.2", 00:16:28.875 "adrfam": "ipv4", 00:16:28.875 "trsvcid": "4420", 00:16:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.875 "hdgst": false, 00:16:28.875 "ddgst": false 00:16:28.875 }, 00:16:28.875 "method": "bdev_nvme_attach_controller" 00:16:28.875 }' 00:16:28.875 [2024-04-19 04:07:43.288260] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:16:28.875 [2024-04-19 04:07:43.288320] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824186 ] 00:16:28.875 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.875 [2024-04-19 04:07:43.371691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.133 [2024-04-19 04:07:43.460843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.133 [2024-04-19 04:07:43.460946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.133 [2024-04-19 04:07:43.460950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.390 I/O targets: 00:16:29.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:29.390 00:16:29.390 00:16:29.390 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.390 http://cunit.sourceforge.net/ 00:16:29.390 00:16:29.390 00:16:29.390 Suite: bdevio tests on: Nvme1n1 00:16:29.390 Test: blockdev write read block ...passed 00:16:29.390 Test: blockdev write zeroes read block ...passed 00:16:29.390 Test: blockdev write zeroes read no split ...passed 00:16:29.647 Test: blockdev write zeroes read split ...passed 00:16:29.647 Test: blockdev write zeroes read split partial ...passed 00:16:29.647 Test: blockdev reset ...[2024-04-19 04:07:43.987440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:29.647 [2024-04-19 04:07:43.987515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218aac0 (9): Bad file descriptor 00:16:29.647 [2024-04-19 04:07:44.002369] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:29.647 passed 00:16:29.647 Test: blockdev write read 8 blocks ...passed 00:16:29.647 Test: blockdev write read size > 128k ...passed 00:16:29.647 Test: blockdev write read invalid size ...passed 00:16:29.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:29.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:29.647 Test: blockdev write read max offset ...passed 00:16:29.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.647 Test: blockdev writev readv 8 blocks ...passed 00:16:29.647 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.906 Test: blockdev writev readv block ...passed 00:16:29.906 Test: blockdev writev readv size > 128k ...passed 00:16:29.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.906 Test: blockdev comparev and writev ...[2024-04-19 04:07:44.218034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.218075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.218411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.218432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.218771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.218791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.218798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.219128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.219138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.219149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.906 [2024-04-19 04:07:44.219155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.906 passed 00:16:29.906 Test: blockdev nvme passthru rw ...passed 00:16:29.906 Test: blockdev nvme passthru vendor specific ...[2024-04-19 04:07:44.300792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.906 [2024-04-19 04:07:44.300807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.300959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.906 [2024-04-19 04:07:44.300968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.301117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.906 [2024-04-19 04:07:44.301125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.906 [2024-04-19 04:07:44.301266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.906 [2024-04-19 04:07:44.301274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.906 passed 00:16:29.906 Test: blockdev nvme admin passthru ...passed 00:16:29.906 Test: blockdev copy ...passed 00:16:29.906 00:16:29.906 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.906 suites 1 1 n/a 0 0 00:16:29.906 tests 23 23 23 0 0 00:16:29.906 asserts 152 152 152 0 n/a 00:16:29.906 00:16:29.906 Elapsed time = 1.150 seconds 00:16:30.166 04:07:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.166 04:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.166 04:07:44 -- common/autotest_common.sh@10 -- # set +x 00:16:30.166 04:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.166 04:07:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:30.166 04:07:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:30.166 04:07:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:30.166 04:07:44 -- nvmf/common.sh@117 -- # sync 00:16:30.166 04:07:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:30.166 04:07:44 -- nvmf/common.sh@120 -- # set +e 00:16:30.166 04:07:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:30.166 04:07:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:30.166 rmmod nvme_tcp 00:16:30.166 rmmod nvme_fabrics 00:16:30.166 rmmod nvme_keyring 00:16:30.166 04:07:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:30.166 04:07:44 -- nvmf/common.sh@124 -- # set -e 00:16:30.166 04:07:44 -- nvmf/common.sh@125 -- # return 0 00:16:30.166 04:07:44 -- nvmf/common.sh@478 -- # '[' -n 3824019 ']' 00:16:30.166 04:07:44 -- nvmf/common.sh@479 -- # killprocess 3824019 00:16:30.166 04:07:44 -- common/autotest_common.sh@936 -- # '[' -z 3824019 ']' 00:16:30.166 04:07:44 -- common/autotest_common.sh@940 -- # kill -0 3824019 00:16:30.166 04:07:44 -- common/autotest_common.sh@941 -- # uname 00:16:30.166 04:07:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.166 04:07:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3824019 00:16:30.166 04:07:44 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:30.166 04:07:44 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:30.166 04:07:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3824019' 00:16:30.166 killing process with pid 3824019 00:16:30.166 04:07:44 -- common/autotest_common.sh@955 -- # kill 3824019 00:16:30.166 04:07:44 -- common/autotest_common.sh@960 -- # wait 3824019 00:16:30.425 04:07:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:30.425 04:07:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:30.425 04:07:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:30.425 04:07:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.425 04:07:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.425 04:07:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.425 04:07:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.425 04:07:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.962 04:07:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:32.962 00:16:32.962 real 0m10.704s 00:16:32.962 user 0m14.419s 00:16:32.962 sys 0m4.846s 00:16:32.962 04:07:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.962 04:07:46 -- common/autotest_common.sh@10 -- # set +x 00:16:32.962 ************************************ 00:16:32.962 END TEST nvmf_bdevio 00:16:32.962 ************************************ 00:16:32.962 04:07:47 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:32.962 04:07:47 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:32.962 04:07:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:32.962 04:07:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.962 04:07:47 -- common/autotest_common.sh@10 -- # set +x 00:16:32.962 ************************************ 00:16:32.962 START TEST nvmf_bdevio_no_huge 00:16:32.962 ************************************ 00:16:32.962 04:07:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:32.962 * Looking for test storage... 
00:16:32.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.962 04:07:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.962 04:07:47 -- nvmf/common.sh@7 -- # uname -s 00:16:32.962 04:07:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.962 04:07:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.962 04:07:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.962 04:07:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.962 04:07:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.962 04:07:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.962 04:07:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.962 04:07:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.962 04:07:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.962 04:07:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.962 04:07:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:32.962 04:07:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:32.962 04:07:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.962 04:07:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.962 04:07:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.962 04:07:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.962 04:07:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.962 04:07:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.962 04:07:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.962 04:07:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.962 04:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.962 04:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.963 04:07:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.963 04:07:47 -- paths/export.sh@5 -- # export PATH 00:16:32.963 04:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.963 04:07:47 -- nvmf/common.sh@47 -- # : 0 00:16:32.963 04:07:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.963 04:07:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.963 04:07:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.963 04:07:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.963 04:07:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.963 04:07:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.963 04:07:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.963 04:07:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.963 04:07:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.963 04:07:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:32.963 04:07:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:32.963 04:07:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:32.963 04:07:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.963 04:07:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:32.963 04:07:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:32.963 04:07:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:32.963 04:07:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.963 04:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.963 04:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.963 04:07:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:32.963 04:07:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:32.963 04:07:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.963 04:07:47 -- common/autotest_common.sh@10 -- # set +x 00:16:38.236 04:07:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:38.236 04:07:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.236 04:07:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.236 04:07:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.236 04:07:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.236 04:07:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.236 04:07:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.236 04:07:52 -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.236 04:07:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.236 04:07:52 -- nvmf/common.sh@296 
-- # e810=() 00:16:38.236 04:07:52 -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.236 04:07:52 -- nvmf/common.sh@297 -- # x722=() 00:16:38.236 04:07:52 -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.236 04:07:52 -- nvmf/common.sh@298 -- # mlx=() 00:16:38.236 04:07:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.236 04:07:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.236 04:07:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.236 04:07:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.236 04:07:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.236 04:07:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:38.236 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:38.236 04:07:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.236 04:07:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:38.236 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:38.236 04:07:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.236 04:07:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.236 04:07:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.236 04:07:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:38.236 Found 
net devices under 0000:af:00.0: cvl_0_0 00:16:38.236 04:07:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.236 04:07:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.236 04:07:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.236 04:07:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.236 04:07:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:38.236 Found net devices under 0000:af:00.1: cvl_0_1 00:16:38.236 04:07:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.236 04:07:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:38.236 04:07:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:38.236 04:07:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:38.236 04:07:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.236 04:07:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.236 04:07:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.236 04:07:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.236 04:07:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.236 04:07:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.236 04:07:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.236 04:07:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.236 04:07:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.236 04:07:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.236 04:07:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.236 04:07:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.236 04:07:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.236 04:07:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.236 04:07:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.236 04:07:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.236 04:07:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.496 04:07:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.496 04:07:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.496 04:07:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:38.496 00:16:38.496 --- 10.0.0.2 ping statistics --- 00:16:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.496 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:38.496 04:07:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:16:38.496 00:16:38.496 --- 10.0.0.1 ping statistics --- 00:16:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.496 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:16:38.496 04:07:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.496 04:07:52 -- nvmf/common.sh@411 -- # return 0 00:16:38.496 04:07:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:38.496 04:07:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.496 04:07:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:38.496 04:07:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:38.496 04:07:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.496 04:07:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:38.496 04:07:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:38.496 04:07:52 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:38.496 04:07:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:38.496 04:07:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:38.496 04:07:52 -- common/autotest_common.sh@10 -- # set +x 00:16:38.496 04:07:52 -- nvmf/common.sh@470 -- # nvmfpid=3828047 00:16:38.496 04:07:52 -- nvmf/common.sh@471 -- # waitforlisten 3828047 00:16:38.496 04:07:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:38.496 04:07:52 -- common/autotest_common.sh@817 -- # '[' -z 3828047 ']' 00:16:38.496 04:07:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.496 04:07:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:38.496 04:07:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.496 04:07:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:38.496 04:07:52 -- common/autotest_common.sh@10 -- # set +x 00:16:38.496 [2024-04-19 04:07:52.946870] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:16:38.496 [2024-04-19 04:07:52.946930] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:38.754 [2024-04-19 04:07:53.042080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.754 [2024-04-19 04:07:53.157110] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.754 [2024-04-19 04:07:53.157147] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.754 [2024-04-19 04:07:53.157157] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.754 [2024-04-19 04:07:53.157165] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.754 [2024-04-19 04:07:53.157173] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
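The nvmf_tcp_init block above is what makes a single-host phy run possible: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to play the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator; the two ports are presumed cabled back-to-back on this rig (a test-bed assumption, nothing the script verifies). Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps the host-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                            # gate: both directions must answer
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Only once both pings succeed does nvmftestinit return 0, and the launch line just above ("ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78") is the whole point of this test: the target runs on 1024 MB of ordinary 4 KiB pages rather than hugepages.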
00:16:38.754 [2024-04-19 04:07:53.157286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:38.754 [2024-04-19 04:07:53.157399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:38.754 [2024-04-19 04:07:53.157512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.754 [2024-04-19 04:07:53.157512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:39.689 04:07:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.689 04:07:53 -- common/autotest_common.sh@850 -- # return 0 00:16:39.689 04:07:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:39.689 04:07:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 04:07:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.689 04:07:53 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.689 04:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 [2024-04-19 04:07:53.930263] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.689 04:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.689 04:07:53 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.689 04:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 Malloc0 00:16:39.689 04:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.689 04:07:53 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.689 04:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 04:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.689 04:07:53 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.689 04:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 04:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.689 04:07:53 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.689 04:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.689 04:07:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 [2024-04-19 04:07:53.980614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.689 04:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.689 04:07:53 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:39.689 04:07:53 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:39.689 04:07:53 -- nvmf/common.sh@521 -- # config=() 00:16:39.689 04:07:53 -- nvmf/common.sh@521 -- # local subsystem config 00:16:39.689 04:07:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:39.689 04:07:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:39.689 { 00:16:39.689 "params": { 00:16:39.689 "name": "Nvme$subsystem", 00:16:39.689 "trtype": "$TEST_TRANSPORT", 00:16:39.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:39.689 "adrfam": "ipv4", 00:16:39.689 
"trsvcid": "$NVMF_PORT", 00:16:39.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:39.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:39.689 "hdgst": ${hdgst:-false}, 00:16:39.689 "ddgst": ${ddgst:-false} 00:16:39.689 }, 00:16:39.689 "method": "bdev_nvme_attach_controller" 00:16:39.689 } 00:16:39.689 EOF 00:16:39.689 )") 00:16:39.689 04:07:53 -- nvmf/common.sh@543 -- # cat 00:16:39.689 04:07:53 -- nvmf/common.sh@545 -- # jq . 00:16:39.689 04:07:53 -- nvmf/common.sh@546 -- # IFS=, 00:16:39.689 04:07:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:39.689 "params": { 00:16:39.689 "name": "Nvme1", 00:16:39.689 "trtype": "tcp", 00:16:39.689 "traddr": "10.0.0.2", 00:16:39.689 "adrfam": "ipv4", 00:16:39.689 "trsvcid": "4420", 00:16:39.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.689 "hdgst": false, 00:16:39.689 "ddgst": false 00:16:39.689 }, 00:16:39.689 "method": "bdev_nvme_attach_controller" 00:16:39.689 }' 00:16:39.689 [2024-04-19 04:07:54.030287] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:16:39.689 [2024-04-19 04:07:54.030359] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3828327 ] 00:16:39.689 [2024-04-19 04:07:54.115616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.948 [2024-04-19 04:07:54.232406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.948 [2024-04-19 04:07:54.232506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.948 [2024-04-19 04:07:54.232507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.207 I/O targets: 00:16:40.207 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:40.207 00:16:40.207 00:16:40.207 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.207 http://cunit.sourceforge.net/ 00:16:40.207 00:16:40.207 00:16:40.207 Suite: bdevio tests on: Nvme1n1 00:16:40.207 Test: blockdev write read block ...passed 00:16:40.207 Test: blockdev write zeroes read block ...passed 00:16:40.207 Test: blockdev write zeroes read no split ...passed 00:16:40.207 Test: blockdev write zeroes read split ...passed 00:16:40.207 Test: blockdev write zeroes read split partial ...passed 00:16:40.207 Test: blockdev reset ...[2024-04-19 04:07:54.727438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:40.207 [2024-04-19 04:07:54.727516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158c20 (9): Bad file descriptor 00:16:40.464 [2024-04-19 04:07:54.744349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:40.464 passed 00:16:40.464 Test: blockdev write read 8 blocks ...passed 00:16:40.464 Test: blockdev write read size > 128k ...passed 00:16:40.464 Test: blockdev write read invalid size ...passed 00:16:40.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:40.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:40.464 Test: blockdev write read max offset ...passed 00:16:40.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.464 Test: blockdev writev readv 8 blocks ...passed 00:16:40.722 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.722 Test: blockdev writev readv block ...passed 00:16:40.722 Test: blockdev writev readv size > 128k ...passed 00:16:40.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.722 Test: blockdev comparev and writev ...[2024-04-19 04:07:55.039821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.039848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.039860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.039867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.040896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:40.722 [2024-04-19 04:07:55.040902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:40.722 passed 00:16:40.722 Test: blockdev nvme passthru rw ...passed 00:16:40.722 Test: blockdev nvme passthru vendor specific ...[2024-04-19 04:07:55.122777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.722 [2024-04-19 04:07:55.122793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.122949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.722 [2024-04-19 04:07:55.122963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.123110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.722 [2024-04-19 04:07:55.123118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:40.722 [2024-04-19 04:07:55.123272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:40.722 [2024-04-19 04:07:55.123281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:40.722 passed 00:16:40.722 Test: blockdev nvme admin passthru ...passed 00:16:40.722 Test: blockdev copy ...passed 00:16:40.722 00:16:40.722 Run Summary: Type Total Ran Passed Failed Inactive 00:16:40.722 suites 1 1 n/a 0 0 00:16:40.722 tests 23 23 23 0 0 00:16:40.722 asserts 152 152 152 0 n/a 00:16:40.722 00:16:40.722 Elapsed time = 1.252 seconds 00:16:41.289 04:07:55 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.289 04:07:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.289 04:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:41.289 04:07:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.289 04:07:55 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:41.289 04:07:55 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:41.289 04:07:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:41.289 04:07:55 -- nvmf/common.sh@117 -- # sync 00:16:41.289 04:07:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.289 04:07:55 -- nvmf/common.sh@120 -- # set +e 00:16:41.289 04:07:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.289 04:07:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.289 rmmod nvme_tcp 00:16:41.289 rmmod nvme_fabrics 00:16:41.289 rmmod nvme_keyring 00:16:41.289 04:07:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.289 04:07:55 -- nvmf/common.sh@124 -- # set -e 00:16:41.289 04:07:55 -- nvmf/common.sh@125 -- # return 0 00:16:41.289 04:07:55 -- nvmf/common.sh@478 -- # '[' -n 3828047 ']' 00:16:41.289 04:07:55 -- nvmf/common.sh@479 -- # killprocess 3828047 00:16:41.289 04:07:55 -- common/autotest_common.sh@936 -- # '[' -z 3828047 ']' 00:16:41.289 04:07:55 -- common/autotest_common.sh@940 -- # kill -0 3828047 00:16:41.289 04:07:55 -- common/autotest_common.sh@941 -- # uname 00:16:41.289 04:07:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.289 04:07:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3828047 00:16:41.289 04:07:55 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:41.289 04:07:55 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:41.289 04:07:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3828047' 00:16:41.289 killing process with pid 3828047 00:16:41.289 04:07:55 -- common/autotest_common.sh@955 -- # kill 3828047 00:16:41.289 04:07:55 -- common/autotest_common.sh@960 -- # wait 3828047 00:16:41.856 04:07:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:41.856 04:07:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:41.856 04:07:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:41.856 04:07:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.856 04:07:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.856 04:07:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.856 04:07:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.856 04:07:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.821 04:07:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.821 00:16:43.821 real 0m11.004s 00:16:43.821 user 0m15.681s 00:16:43.821 sys 0m5.339s 00:16:43.821 04:07:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.821 04:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.821 ************************************ 00:16:43.821 END TEST nvmf_bdevio_no_huge 00:16:43.821 ************************************ 00:16:43.821 04:07:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:43.821 04:07:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.821 04:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.821 04:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.821 ************************************ 00:16:43.821 START TEST nvmf_tls 00:16:43.821 ************************************ 00:16:43.821 04:07:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:44.080 * Looking for test storage... 
00:16:44.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.080 04:07:58 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.080 04:07:58 -- nvmf/common.sh@7 -- # uname -s 00:16:44.080 04:07:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.080 04:07:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.080 04:07:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.080 04:07:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.080 04:07:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.080 04:07:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.080 04:07:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.080 04:07:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.080 04:07:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.080 04:07:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.080 04:07:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:44.080 04:07:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:44.080 04:07:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.080 04:07:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.080 04:07:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.080 04:07:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.080 04:07:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.080 04:07:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.080 04:07:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.080 04:07:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.080 04:07:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.080 04:07:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.080 04:07:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.080 04:07:58 -- paths/export.sh@5 -- # export PATH 00:16:44.080 04:07:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.080 04:07:58 -- nvmf/common.sh@47 -- # : 0 00:16:44.080 04:07:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.080 04:07:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.080 04:07:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.080 04:07:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.080 04:07:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.080 04:07:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.080 04:07:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.080 04:07:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.080 04:07:58 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.080 04:07:58 -- target/tls.sh@62 -- # nvmftestinit 00:16:44.080 04:07:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:44.080 04:07:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.080 04:07:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.080 04:07:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.080 04:07:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.080 04:07:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.080 04:07:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.080 04:07:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.081 04:07:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:44.081 04:07:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:44.081 04:07:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.081 04:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:49.354 04:08:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:49.354 04:08:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.354 04:08:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.354 04:08:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.354 04:08:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.354 04:08:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.354 04:08:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.354 04:08:03 -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.354 04:08:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.354 04:08:03 -- nvmf/common.sh@296 -- # e810=() 00:16:49.354 
04:08:03 -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.354 04:08:03 -- nvmf/common.sh@297 -- # x722=() 00:16:49.354 04:08:03 -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.354 04:08:03 -- nvmf/common.sh@298 -- # mlx=() 00:16:49.354 04:08:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.354 04:08:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.354 04:08:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.355 04:08:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.355 04:08:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.355 04:08:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.355 04:08:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.355 04:08:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.355 04:08:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.355 04:08:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.355 04:08:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:49.355 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:49.355 04:08:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.355 04:08:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:49.355 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:49.355 04:08:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.355 04:08:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.355 04:08:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.355 04:08:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:49.355 Found net devices under 
0000:af:00.0: cvl_0_0 00:16:49.355 04:08:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.355 04:08:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.355 04:08:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.355 04:08:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.355 04:08:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:49.355 Found net devices under 0000:af:00.1: cvl_0_1 00:16:49.355 04:08:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.355 04:08:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:49.355 04:08:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:49.355 04:08:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:49.355 04:08:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.355 04:08:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.355 04:08:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.355 04:08:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.355 04:08:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.355 04:08:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.355 04:08:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.355 04:08:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.355 04:08:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.355 04:08:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.355 04:08:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.355 04:08:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.355 04:08:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.613 04:08:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.613 04:08:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.613 04:08:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.613 04:08:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.613 04:08:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.613 04:08:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.613 04:08:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:16:49.613 00:16:49.613 --- 10.0.0.2 ping statistics --- 00:16:49.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.613 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:16:49.613 04:08:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:16:49.613 00:16:49.613 --- 10.0.0.1 ping statistics --- 00:16:49.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.613 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:16:49.613 04:08:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.613 04:08:04 -- nvmf/common.sh@411 -- # return 0 00:16:49.613 04:08:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:49.613 04:08:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.613 04:08:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:49.613 04:08:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:49.613 04:08:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.613 04:08:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:49.613 04:08:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:49.872 04:08:04 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:49.872 04:08:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:49.872 04:08:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:49.872 04:08:04 -- common/autotest_common.sh@10 -- # set +x 00:16:49.872 04:08:04 -- nvmf/common.sh@470 -- # nvmfpid=3832241 00:16:49.872 04:08:04 -- nvmf/common.sh@471 -- # waitforlisten 3832241 00:16:49.872 04:08:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:49.872 04:08:04 -- common/autotest_common.sh@817 -- # '[' -z 3832241 ']' 00:16:49.872 04:08:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.872 04:08:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:49.872 04:08:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.872 04:08:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:49.872 04:08:04 -- common/autotest_common.sh@10 -- # set +x 00:16:49.872 [2024-04-19 04:08:04.212141] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:16:49.872 [2024-04-19 04:08:04.212197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.872 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.872 [2024-04-19 04:08:04.293877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.872 [2024-04-19 04:08:04.382112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.872 [2024-04-19 04:08:04.382156] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.872 [2024-04-19 04:08:04.382166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.872 [2024-04-19 04:08:04.382174] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.872 [2024-04-19 04:08:04.382182] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
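The TLS target is started differently from the earlier runs: -m 0x2 (a single reactor) and, crucially, --wait-for-rpc, which parks the app before subsystem initialization so the socket implementation can be reconfigured over RPC first. The probing that follows (flipping --tls-version between 13 and 7, toggling ktls on and off) only verifies that options round-trip through sock_impl_get_options; the configuration the test actually settles on reduces to the sequence below, a condensed sketch of the rpc.py calls spread over the next stretch of log:

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13        # TLS 1.3
    rpc.py framework_start_init                                 # only now does the app finish booting
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7jVDmAUfg0

-k on the listener marks it as TLS-enabled (the log flags this as experimental), and --psk on the host entry pins nqn.2016-06.io.spdk:host1 to the first of the two keys generated just below.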
00:16:49.872 [2024-04-19 04:08:04.382210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.131 04:08:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.131 04:08:04 -- common/autotest_common.sh@850 -- # return 0 00:16:50.131 04:08:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:50.131 04:08:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:50.131 04:08:04 -- common/autotest_common.sh@10 -- # set +x 00:16:50.131 04:08:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.131 04:08:04 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:50.131 04:08:04 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:50.388 true 00:16:50.388 04:08:04 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.388 04:08:04 -- target/tls.sh@73 -- # jq -r .tls_version 00:16:50.647 04:08:04 -- target/tls.sh@73 -- # version=0 00:16:50.647 04:08:04 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:50.647 04:08:04 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:50.647 04:08:05 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:50.647 04:08:05 -- target/tls.sh@81 -- # jq -r .tls_version 00:16:50.905 04:08:05 -- target/tls.sh@81 -- # version=13 00:16:50.905 04:08:05 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:50.905 04:08:05 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:51.164 04:08:05 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.164 04:08:05 -- target/tls.sh@89 -- # jq -r .tls_version 00:16:51.422 04:08:05 -- target/tls.sh@89 -- # version=7 00:16:51.422 04:08:05 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:51.422 04:08:05 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.422 04:08:05 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:51.681 04:08:06 -- target/tls.sh@96 -- # ktls=false 00:16:51.681 04:08:06 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:51.681 04:08:06 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:51.946 04:08:06 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.946 04:08:06 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:52.204 04:08:06 -- target/tls.sh@104 -- # ktls=true 00:16:52.204 04:08:06 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:52.204 04:08:06 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:52.462 04:08:06 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.462 04:08:06 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:52.720 04:08:07 -- target/tls.sh@112 -- # ktls=false 00:16:52.720 04:08:07 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:52.720 04:08:07 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:16:52.720 04:08:07 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:52.720 04:08:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:52.720 04:08:07 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:52.721 04:08:07 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:16:52.721 04:08:07 -- nvmf/common.sh@693 -- # digest=1 00:16:52.721 04:08:07 -- nvmf/common.sh@694 -- # python - 00:16:52.721 04:08:07 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.721 04:08:07 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:52.721 04:08:07 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:52.721 04:08:07 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:52.721 04:08:07 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:52.721 04:08:07 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:16:52.721 04:08:07 -- nvmf/common.sh@693 -- # digest=1 00:16:52.721 04:08:07 -- nvmf/common.sh@694 -- # python - 00:16:52.721 04:08:07 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.721 04:08:07 -- target/tls.sh@121 -- # mktemp 00:16:52.721 04:08:07 -- target/tls.sh@121 -- # key_path=/tmp/tmp.7jVDmAUfg0 00:16:52.721 04:08:07 -- target/tls.sh@122 -- # mktemp 00:16:52.721 04:08:07 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.gV6RP3K7CE 00:16:52.721 04:08:07 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:52.721 04:08:07 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:52.721 04:08:07 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.7jVDmAUfg0 00:16:52.721 04:08:07 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gV6RP3K7CE 00:16:52.721 04:08:07 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.980 04:08:07 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:53.239 04:08:07 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.7jVDmAUfg0 00:16:53.239 04:08:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.7jVDmAUfg0 00:16:53.239 04:08:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.497 [2024-04-19 04:08:07.954845] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.497 04:08:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.755 04:08:08 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:54.013 [2024-04-19 04:08:08.432121] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:54.013 [2024-04-19 04:08:08.432334] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.013 04:08:08 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:54.272 malloc0 00:16:54.272 04:08:08 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.530 04:08:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7jVDmAUfg0 00:16:54.789 [2024-04-19 04:08:09.159350] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:54.789 04:08:09 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7jVDmAUfg0 00:16:54.789 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.767 Initializing NVMe Controllers 00:17:04.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:04.767 Initialization complete. Launching workers. 00:17:04.767 ======================================================== 00:17:04.767 Latency(us) 00:17:04.767 Device Information : IOPS MiB/s Average min max 00:17:04.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10713.26 41.85 5975.08 1199.57 7470.59 00:17:04.767 ======================================================== 00:17:04.767 Total : 10713.26 41.85 5975.08 1199.57 7470.59 00:17:04.767 00:17:05.026 04:08:19 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7jVDmAUfg0 00:17:05.026 04:08:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.026 04:08:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.026 04:08:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.026 04:08:19 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7jVDmAUfg0' 00:17:05.026 04:08:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.026 04:08:19 -- target/tls.sh@28 -- # bdevperf_pid=3835004 00:17:05.026 04:08:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.026 04:08:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.026 04:08:19 -- target/tls.sh@31 -- # waitforlisten 3835004 /var/tmp/bdevperf.sock 00:17:05.026 04:08:19 -- common/autotest_common.sh@817 -- # '[' -z 3835004 ']' 00:17:05.026 04:08:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.026 04:08:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:05.026 04:08:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.026 04:08:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:05.026 04:08:19 -- common/autotest_common.sh@10 -- # set +x 00:17:05.026 [2024-04-19 04:08:19.346139] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:17:05.026 [2024-04-19 04:08:19.346201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835004 ] 00:17:05.026 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.026 [2024-04-19 04:08:19.403293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.026 [2024-04-19 04:08:19.474611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.285 04:08:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:05.285 04:08:19 -- common/autotest_common.sh@850 -- # return 0 00:17:05.285 04:08:19 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7jVDmAUfg0 00:17:05.285 [2024-04-19 04:08:19.702470] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.285 [2024-04-19 04:08:19.702529] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:05.285 TLSTESTn1 00:17:05.285 04:08:19 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:05.543 Running I/O for 10 seconds... 00:17:15.520 00:17:15.520 Latency(us) 00:17:15.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:15.520 Verification LBA range: start 0x0 length 0x2000 00:17:15.520 TLSTESTn1 : 10.02 5111.78 19.97 0.00 0.00 24998.24 6315.29 37653.41 00:17:15.520 =================================================================================================================== 00:17:15.520 Total : 5111.78 19.97 0.00 0.00 24998.24 6315.29 37653.41 00:17:15.520 0 00:17:15.520 04:08:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:15.520 04:08:29 -- target/tls.sh@45 -- # killprocess 3835004 00:17:15.520 04:08:29 -- common/autotest_common.sh@936 -- # '[' -z 3835004 ']' 00:17:15.520 04:08:29 -- common/autotest_common.sh@940 -- # kill -0 3835004 00:17:15.520 04:08:29 -- common/autotest_common.sh@941 -- # uname 00:17:15.520 04:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.520 04:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3835004 00:17:15.520 04:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:15.520 04:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:15.520 04:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3835004' 00:17:15.520 killing process with pid 3835004 00:17:15.520 04:08:30 -- common/autotest_common.sh@955 -- # kill 3835004 00:17:15.520 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.520 00:17:15.520 Latency(us) 00:17:15.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.520 =================================================================================================================== 00:17:15.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.520 [2024-04-19 04:08:30.029213] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:15.520 04:08:30 -- common/autotest_common.sh@960 -- # wait 3835004 00:17:15.779 04:08:30 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gV6RP3K7CE 00:17:15.779 04:08:30 -- common/autotest_common.sh@638 -- # local es=0 00:17:15.779 04:08:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gV6RP3K7CE 00:17:15.779 04:08:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:15.779 04:08:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:15.779 04:08:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:15.779 04:08:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:15.779 04:08:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gV6RP3K7CE 00:17:15.779 04:08:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.779 04:08:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.779 04:08:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:15.779 04:08:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gV6RP3K7CE' 00:17:15.779 04:08:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.779 04:08:30 -- target/tls.sh@28 -- # bdevperf_pid=3836850 00:17:15.779 04:08:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.779 04:08:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.779 04:08:30 -- target/tls.sh@31 -- # waitforlisten 3836850 /var/tmp/bdevperf.sock 00:17:15.779 04:08:30 -- common/autotest_common.sh@817 -- # '[' -z 3836850 ']' 00:17:15.779 04:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.779 04:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.779 04:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.779 04:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.779 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:15.779 [2024-04-19 04:08:30.278528] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
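
The run_bdevperf helper that produced the TLSTESTn1 result above, and that every negative case below reuses under NOT, can be pieced together from its xtrace: start bdevperf idle with -z, attach a TLS controller over its private RPC socket, drive the verify workload, then kill it. A sketch under the assumption that bdevperf is backgrounded and that waitforlisten/killprocess behave as in autotest_common.sh:

    # Sketch of run_bdevperf (target/tls.sh@22-45) reconstructed from the
    # xtrace; backgrounding and cleanup details are assumptions.
    run_bdevperf() {
        local subnqn=$1 hostnqn=$2 psk="--psk $3"
        local bdevperf_rpc_sock=/var/tmp/bdevperf.sock
        "$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$bdevperf_rpc_sock" \
            -q 128 -o 4096 -w verify -t 10 &
        bdevperf_pid=$!
        trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$bdevperf_pid" "$bdevperf_rpc_sock"
        # $psk expands to "--psk <file>" (or nothing), hence left unquoted
        "$rootdir/scripts/rpc.py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" $psk
        "$rootdir/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$bdevperf_rpc_sock" perform_tests
        killprocess "$bdevperf_pid"
    }
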
00:17:15.779 [2024-04-19 04:08:30.278588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3836850 ] 00:17:16.039 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.039 [2024-04-19 04:08:30.336055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.039 [2024-04-19 04:08:30.404104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.039 04:08:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.039 04:08:30 -- common/autotest_common.sh@850 -- # return 0 00:17:16.039 04:08:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gV6RP3K7CE 00:17:16.316 [2024-04-19 04:08:30.719205] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.317 [2024-04-19 04:08:30.719270] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:16.317 [2024-04-19 04:08:30.730328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:16.317 [2024-04-19 04:08:30.730449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193e900 (107): Transport endpoint is not connected 00:17:16.317 [2024-04-19 04:08:30.731421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193e900 (9): Bad file descriptor 00:17:16.317 [2024-04-19 04:08:30.732422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:16.317 [2024-04-19 04:08:30.732432] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:16.317 [2024-04-19 04:08:30.732438] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
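
This failure is deliberate: /tmp/tmp.gV6RP3K7CE holds a freshly generated key the target never registered, so the TLS handshake collapses and the initiator sees errno 107 (Transport endpoint is not connected) before the controller ever initializes; the JSON-RPC error that follows is the surface symptom. The NOT wrapper inverts the exit status so that an expected failure keeps the suite green. Roughly, as a simplified sketch rather than the verbatim autotest_common.sh helper:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was expected
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gV6RP3K7CE
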
00:17:16.317 request: 00:17:16.317 { 00:17:16.317 "name": "TLSTEST", 00:17:16.317 "trtype": "tcp", 00:17:16.317 "traddr": "10.0.0.2", 00:17:16.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.317 "adrfam": "ipv4", 00:17:16.317 "trsvcid": "4420", 00:17:16.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.317 "psk": "/tmp/tmp.gV6RP3K7CE", 00:17:16.317 "method": "bdev_nvme_attach_controller", 00:17:16.317 "req_id": 1 00:17:16.317 } 00:17:16.317 Got JSON-RPC error response 00:17:16.317 response: 00:17:16.317 { 00:17:16.317 "code": -32602, 00:17:16.317 "message": "Invalid parameters" 00:17:16.317 } 00:17:16.317 04:08:30 -- target/tls.sh@36 -- # killprocess 3836850 00:17:16.317 04:08:30 -- common/autotest_common.sh@936 -- # '[' -z 3836850 ']' 00:17:16.317 04:08:30 -- common/autotest_common.sh@940 -- # kill -0 3836850 00:17:16.317 04:08:30 -- common/autotest_common.sh@941 -- # uname 00:17:16.317 04:08:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:16.317 04:08:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3836850 00:17:16.317 04:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:16.317 04:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:16.317 04:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3836850' 00:17:16.317 killing process with pid 3836850 00:17:16.317 04:08:30 -- common/autotest_common.sh@955 -- # kill 3836850 00:17:16.317 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.317 00:17:16.317 Latency(us) 00:17:16.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.317 =================================================================================================================== 00:17:16.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.317 [2024-04-19 04:08:30.804203] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:16.317 04:08:30 -- common/autotest_common.sh@960 -- # wait 3836850 00:17:16.585 04:08:30 -- target/tls.sh@37 -- # return 1 00:17:16.585 04:08:30 -- common/autotest_common.sh@641 -- # es=1 00:17:16.585 04:08:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:16.585 04:08:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:16.585 04:08:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:16.585 04:08:30 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7jVDmAUfg0 00:17:16.585 04:08:30 -- common/autotest_common.sh@638 -- # local es=0 00:17:16.585 04:08:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7jVDmAUfg0 00:17:16.585 04:08:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:16.585 04:08:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:16.585 04:08:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:16.585 04:08:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:16.585 04:08:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7jVDmAUfg0 00:17:16.585 04:08:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.585 04:08:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.585 04:08:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:17:16.585 04:08:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7jVDmAUfg0' 00:17:16.585 04:08:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.585 04:08:30 -- target/tls.sh@28 -- # bdevperf_pid=3837115 00:17:16.585 04:08:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.585 04:08:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.585 04:08:31 -- target/tls.sh@31 -- # waitforlisten 3837115 /var/tmp/bdevperf.sock 00:17:16.585 04:08:31 -- common/autotest_common.sh@817 -- # '[' -z 3837115 ']' 00:17:16.585 04:08:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.585 04:08:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.585 04:08:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.585 04:08:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.585 04:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:16.585 [2024-04-19 04:08:31.047609] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:16.585 [2024-04-19 04:08:31.047674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837115 ] 00:17:16.585 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.585 [2024-04-19 04:08:31.105785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.843 [2024-04-19 04:08:31.169639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.411 04:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:17.411 04:08:31 -- common/autotest_common.sh@850 -- # return 0 00:17:17.411 04:08:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.7jVDmAUfg0 00:17:17.670 [2024-04-19 04:08:32.057839] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.670 [2024-04-19 04:08:32.057930] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:17.670 [2024-04-19 04:08:32.063707] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:17.670 [2024-04-19 04:08:32.063736] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:17.670 [2024-04-19 04:08:32.063767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:17.670 [2024-04-19 04:08:32.064076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe6900 (107): Transport endpoint is not connected 00:17:17.670 [2024-04-19 04:08:32.065069] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe6900 (9): Bad file descriptor 00:17:17.670 [2024-04-19 04:08:32.066070] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:17.670 [2024-04-19 04:08:32.066079] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:17.670 [2024-04-19 04:08:32.066085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.670 request: 00:17:17.670 { 00:17:17.670 "name": "TLSTEST", 00:17:17.670 "trtype": "tcp", 00:17:17.670 "traddr": "10.0.0.2", 00:17:17.670 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.670 "adrfam": "ipv4", 00:17:17.670 "trsvcid": "4420", 00:17:17.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.670 "psk": "/tmp/tmp.7jVDmAUfg0", 00:17:17.670 "method": "bdev_nvme_attach_controller", 00:17:17.670 "req_id": 1 00:17:17.670 } 00:17:17.670 Got JSON-RPC error response 00:17:17.670 response: 00:17:17.670 { 00:17:17.670 "code": -32602, 00:17:17.670 "message": "Invalid parameters" 00:17:17.670 } 00:17:17.670 04:08:32 -- target/tls.sh@36 -- # killprocess 3837115 00:17:17.670 04:08:32 -- common/autotest_common.sh@936 -- # '[' -z 3837115 ']' 00:17:17.670 04:08:32 -- common/autotest_common.sh@940 -- # kill -0 3837115 00:17:17.670 04:08:32 -- common/autotest_common.sh@941 -- # uname 00:17:17.670 04:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.670 04:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3837115 00:17:17.670 04:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:17.670 04:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:17.670 04:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3837115' 00:17:17.670 killing process with pid 3837115 00:17:17.670 04:08:32 -- common/autotest_common.sh@955 -- # kill 3837115 00:17:17.670 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.670 00:17:17.670 Latency(us) 00:17:17.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.670 =================================================================================================================== 00:17:17.670 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.670 [2024-04-19 04:08:32.143715] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:17.670 04:08:32 -- common/autotest_common.sh@960 -- # wait 3837115 00:17:17.929 04:08:32 -- target/tls.sh@37 -- # return 1 00:17:17.929 04:08:32 -- common/autotest_common.sh@641 -- # es=1 00:17:17.929 04:08:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:17.929 04:08:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:17.929 04:08:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:17.929 04:08:32 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7jVDmAUfg0 00:17:17.929 04:08:32 -- common/autotest_common.sh@638 -- # local es=0 00:17:17.929 04:08:32 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7jVDmAUfg0 00:17:17.929 04:08:32 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:17.929 04:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:17.929 04:08:32 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:17.929 04:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:17.929 04:08:32 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7jVDmAUfg0 00:17:17.929 04:08:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:17.929 04:08:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:17.929 04:08:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:17.929 04:08:32 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7jVDmAUfg0' 00:17:17.929 04:08:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.929 04:08:32 -- target/tls.sh@28 -- # bdevperf_pid=3837384 00:17:17.929 04:08:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.929 04:08:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.929 04:08:32 -- target/tls.sh@31 -- # waitforlisten 3837384 /var/tmp/bdevperf.sock 00:17:17.929 04:08:32 -- common/autotest_common.sh@817 -- # '[' -z 3837384 ']' 00:17:17.929 04:08:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.929 04:08:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.929 04:08:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.929 04:08:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.929 04:08:32 -- common/autotest_common.sh@10 -- # set +x 00:17:17.929 [2024-04-19 04:08:32.379939] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:17:17.929 [2024-04-19 04:08:32.379985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837384 ] 00:17:17.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.929 [2024-04-19 04:08:32.424269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.187 [2024-04-19 04:08:32.488148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.187 04:08:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.187 04:08:32 -- common/autotest_common.sh@850 -- # return 0 00:17:18.187 04:08:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7jVDmAUfg0 00:17:18.447 [2024-04-19 04:08:32.738772] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:18.447 [2024-04-19 04:08:32.738840] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:18.447 [2024-04-19 04:08:32.745467] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:18.447 [2024-04-19 04:08:32.745494] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:18.447 [2024-04-19 04:08:32.745524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:18.447 [2024-04-19 04:08:32.745993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2900 (107): Transport endpoint is not connected 00:17:18.447 [2024-04-19 04:08:32.746986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a2900 (9): Bad file descriptor 00:17:18.447 [2024-04-19 04:08:32.747988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:18.447 [2024-04-19 04:08:32.747996] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:18.447 [2024-04-19 04:08:32.748001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
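
Both identity-mismatch cases fail on the target side of the handshake: @149 presented the right key under the wrong hostnqn (host2), and @152 aimed the right key at the wrong subsystem (cnode2). In each, the listener takes the PSK identity from the ClientHello, of the form 'NVMe0R01 <hostnqn> <subnqn>', and finds nothing registered under it, which is what the tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above report. A hypothetical helper (not an SPDK function) spelling out the lookup string:

    # Illustration only; the format matches the "Could not find PSK for
    # identity" errors logged above.
    psk_identity() {
        printf 'NVMe0R01 %s %s\n' "$1" "$2"    # hostnqn, then subnqn
    }
    psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
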
00:17:18.447 request: 00:17:18.447 { 00:17:18.447 "name": "TLSTEST", 00:17:18.447 "trtype": "tcp", 00:17:18.447 "traddr": "10.0.0.2", 00:17:18.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.447 "adrfam": "ipv4", 00:17:18.447 "trsvcid": "4420", 00:17:18.447 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:18.447 "psk": "/tmp/tmp.7jVDmAUfg0", 00:17:18.447 "method": "bdev_nvme_attach_controller", 00:17:18.447 "req_id": 1 00:17:18.447 } 00:17:18.447 Got JSON-RPC error response 00:17:18.447 response: 00:17:18.447 { 00:17:18.447 "code": -32602, 00:17:18.447 "message": "Invalid parameters" 00:17:18.447 } 00:17:18.447 04:08:32 -- target/tls.sh@36 -- # killprocess 3837384 00:17:18.447 04:08:32 -- common/autotest_common.sh@936 -- # '[' -z 3837384 ']' 00:17:18.447 04:08:32 -- common/autotest_common.sh@940 -- # kill -0 3837384 00:17:18.447 04:08:32 -- common/autotest_common.sh@941 -- # uname 00:17:18.447 04:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.447 04:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3837384 00:17:18.447 04:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.447 04:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.447 04:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3837384' 00:17:18.447 killing process with pid 3837384 00:17:18.447 04:08:32 -- common/autotest_common.sh@955 -- # kill 3837384 00:17:18.447 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.447 00:17:18.447 Latency(us) 00:17:18.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.447 =================================================================================================================== 00:17:18.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:18.447 [2024-04-19 04:08:32.827861] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:18.447 04:08:32 -- common/autotest_common.sh@960 -- # wait 3837384 00:17:18.706 04:08:33 -- target/tls.sh@37 -- # return 1 00:17:18.706 04:08:33 -- common/autotest_common.sh@641 -- # es=1 00:17:18.706 04:08:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:18.706 04:08:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:18.706 04:08:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:18.706 04:08:33 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.706 04:08:33 -- common/autotest_common.sh@638 -- # local es=0 00:17:18.706 04:08:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.706 04:08:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:18.706 04:08:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:18.706 04:08:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:18.706 04:08:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:18.706 04:08:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:18.706 04:08:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.706 04:08:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.706 04:08:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.706 04:08:33 -- target/tls.sh@23 -- # psk= 
00:17:18.706 04:08:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.706 04:08:33 -- target/tls.sh@28 -- # bdevperf_pid=3837406 00:17:18.706 04:08:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.706 04:08:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.706 04:08:33 -- target/tls.sh@31 -- # waitforlisten 3837406 /var/tmp/bdevperf.sock 00:17:18.706 04:08:33 -- common/autotest_common.sh@817 -- # '[' -z 3837406 ']' 00:17:18.706 04:08:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.706 04:08:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.706 04:08:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.706 04:08:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.706 04:08:33 -- common/autotest_common.sh@10 -- # set +x 00:17:18.706 [2024-04-19 04:08:33.061859] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:18.706 [2024-04-19 04:08:33.061903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837406 ] 00:17:18.706 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.706 [2024-04-19 04:08:33.108651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.706 [2024-04-19 04:08:33.170410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.964 04:08:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.964 04:08:33 -- common/autotest_common.sh@850 -- # return 0 00:17:18.964 04:08:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:18.964 [2024-04-19 04:08:33.427239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:18.964 [2024-04-19 04:08:33.428615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a22f70 (9): Bad file descriptor 00:17:18.964 [2024-04-19 04:08:33.429613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.964 [2024-04-19 04:08:33.429623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:18.964 [2024-04-19 04:08:33.429629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
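
The @155 case dropped the PSK altogether (psk=''), so the attach above was issued with no --psk at all against a listener that setup_nvmf_tgt creates with -k (secure channel required; its xtrace appears further down at tls.sh@53). The server drops the plain connection and the client fails with the same ENOTCONN signature as before. The only client-side difference is the missing flag:

    # Same attach as the earlier cases but without --psk; against a listener
    # created via "nvmf_subsystem_add_listener ... -k" this is expected to fail.
    $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
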
00:17:18.964 request: 00:17:18.964 { 00:17:18.964 "name": "TLSTEST", 00:17:18.964 "trtype": "tcp", 00:17:18.964 "traddr": "10.0.0.2", 00:17:18.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.964 "adrfam": "ipv4", 00:17:18.964 "trsvcid": "4420", 00:17:18.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.964 "method": "bdev_nvme_attach_controller", 00:17:18.964 "req_id": 1 00:17:18.964 } 00:17:18.964 Got JSON-RPC error response 00:17:18.964 response: 00:17:18.964 { 00:17:18.964 "code": -32602, 00:17:18.964 "message": "Invalid parameters" 00:17:18.964 } 00:17:18.964 04:08:33 -- target/tls.sh@36 -- # killprocess 3837406 00:17:18.964 04:08:33 -- common/autotest_common.sh@936 -- # '[' -z 3837406 ']' 00:17:18.964 04:08:33 -- common/autotest_common.sh@940 -- # kill -0 3837406 00:17:18.964 04:08:33 -- common/autotest_common.sh@941 -- # uname 00:17:18.964 04:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.964 04:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3837406 00:17:19.223 04:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.223 04:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.223 04:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3837406' 00:17:19.223 killing process with pid 3837406 00:17:19.223 04:08:33 -- common/autotest_common.sh@955 -- # kill 3837406 00:17:19.223 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.223 00:17:19.223 Latency(us) 00:17:19.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.223 =================================================================================================================== 00:17:19.223 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.224 04:08:33 -- common/autotest_common.sh@960 -- # wait 3837406 00:17:19.224 04:08:33 -- target/tls.sh@37 -- # return 1 00:17:19.224 04:08:33 -- common/autotest_common.sh@641 -- # es=1 00:17:19.224 04:08:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:19.224 04:08:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:19.224 04:08:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:19.224 04:08:33 -- target/tls.sh@158 -- # killprocess 3832241 00:17:19.224 04:08:33 -- common/autotest_common.sh@936 -- # '[' -z 3832241 ']' 00:17:19.224 04:08:33 -- common/autotest_common.sh@940 -- # kill -0 3832241 00:17:19.224 04:08:33 -- common/autotest_common.sh@941 -- # uname 00:17:19.224 04:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.224 04:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3832241 00:17:19.224 04:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:19.224 04:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:19.224 04:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3832241' 00:17:19.224 killing process with pid 3832241 00:17:19.224 04:08:33 -- common/autotest_common.sh@955 -- # kill 3832241 00:17:19.224 [2024-04-19 04:08:33.745190] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:19.224 04:08:33 -- common/autotest_common.sh@960 -- # wait 3832241 00:17:19.482 04:08:33 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:19.482 04:08:33 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:17:19.482 04:08:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:19.482 04:08:33 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:19.482 04:08:33 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:19.482 04:08:33 -- nvmf/common.sh@693 -- # digest=2 00:17:19.482 04:08:33 -- nvmf/common.sh@694 -- # python - 00:17:19.740 04:08:34 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:19.740 04:08:34 -- target/tls.sh@160 -- # mktemp 00:17:19.740 04:08:34 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.p2vZaM4iuw 00:17:19.740 04:08:34 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:19.740 04:08:34 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.p2vZaM4iuw 00:17:19.740 04:08:34 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:19.740 04:08:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:19.740 04:08:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:19.740 04:08:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.740 04:08:34 -- nvmf/common.sh@470 -- # nvmfpid=3837678 00:17:19.740 04:08:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:19.740 04:08:34 -- nvmf/common.sh@471 -- # waitforlisten 3837678 00:17:19.740 04:08:34 -- common/autotest_common.sh@817 -- # '[' -z 3837678 ']' 00:17:19.740 04:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.740 04:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:19.741 04:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.741 04:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:19.741 04:08:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 [2024-04-19 04:08:34.087293] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:19.741 [2024-04-19 04:08:34.087360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.741 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.741 [2024-04-19 04:08:34.166569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.741 [2024-04-19 04:08:34.247121] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.741 [2024-04-19 04:08:34.247171] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.741 [2024-04-19 04:08:34.247182] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.741 [2024-04-19 04:08:34.247196] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.741 [2024-04-19 04:08:34.247203] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
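
Step @159 switches from a raw key file to the TP 8006-style interchange format: the configured key bytes followed by their CRC32 (little-endian), base64-encoded and wrapped as NVMeTLSkey-1:0<digest>:<base64>:, where digest 2 denotes the SHA-384 variant. The helper below is a reconstruction of the nvmf/common.sh format_key logic, not its verbatim body; with the inputs above it should print the key_long value logged here:

    format_interchange_psk() {
        # Assumed reconstruction: append the little-endian CRC32 of the key,
        # base64-encode, and wrap in the NVMeTLSkey-1 prefix.
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:0%s:%s:" % (sys.argv[2], base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$1" "$2"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
    # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
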
00:17:19.741 [2024-04-19 04:08:34.247224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.677 04:08:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.677 04:08:34 -- common/autotest_common.sh@850 -- # return 0 00:17:20.677 04:08:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:20.677 04:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:20.677 04:08:34 -- common/autotest_common.sh@10 -- # set +x 00:17:20.677 04:08:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.677 04:08:34 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:20.677 04:08:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.p2vZaM4iuw 00:17:20.677 04:08:34 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:20.677 [2024-04-19 04:08:35.104792] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.677 04:08:35 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:21.041 04:08:35 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:21.041 [2024-04-19 04:08:35.417603] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:21.041 [2024-04-19 04:08:35.417823] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.041 04:08:35 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:21.309 malloc0 00:17:21.309 04:08:35 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:21.309 04:08:35 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:21.568 [2024-04-19 04:08:35.876084] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:21.568 04:08:35 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2vZaM4iuw 00:17:21.568 04:08:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.568 04:08:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:21.568 04:08:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.568 04:08:35 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p2vZaM4iuw' 00:17:21.568 04:08:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.568 04:08:35 -- target/tls.sh@28 -- # bdevperf_pid=3837975 00:17:21.568 04:08:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.568 04:08:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.568 04:08:35 -- target/tls.sh@31 -- # waitforlisten 3837975 /var/tmp/bdevperf.sock 00:17:21.568 04:08:35 -- common/autotest_common.sh@817 -- # '[' -z 3837975 ']' 00:17:21.568 04:08:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.568 04:08:35 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.568 04:08:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.568 04:08:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.568 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:21.568 [2024-04-19 04:08:35.937161] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:21.568 [2024-04-19 04:08:35.937203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837975 ] 00:17:21.568 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.568 [2024-04-19 04:08:35.982393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.568 [2024-04-19 04:08:36.057269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.504 04:08:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.504 04:08:36 -- common/autotest_common.sh@850 -- # return 0 00:17:22.504 04:08:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:22.504 [2024-04-19 04:08:36.943561] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.504 [2024-04-19 04:08:36.943633] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:22.504 TLSTESTn1 00:17:22.763 04:08:37 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.763 Running I/O for 10 seconds... 
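
While that verify run proceeds, note the target side it talks to: setup_nvmf_tgt (target/tls.sh@49-58, xtrace above) is rebuilt for every variation of this test and differs between rounds only in the key file. Reconstructed as a sketch from the commands logged above:

    setup_nvmf_tgt() {
        local key=$1
        $rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o
        $rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -s SPDK00000000000001 -m 10
        # -k: the listener only accepts TLS-secured connections
        $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -k
        $rootdir/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
        $rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
        $rootdir/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
            nqn.2016-06.io.spdk:host1 --psk "$key"
    }
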
00:17:32.738 00:17:32.738 Latency(us) 00:17:32.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.738 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.738 Verification LBA range: start 0x0 length 0x2000 00:17:32.738 TLSTESTn1 : 10.02 4642.37 18.13 0.00 0.00 27530.04 4498.15 41466.41 00:17:32.738 =================================================================================================================== 00:17:32.738 Total : 4642.37 18.13 0.00 0.00 27530.04 4498.15 41466.41 00:17:32.738 0 00:17:32.738 04:08:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.738 04:08:47 -- target/tls.sh@45 -- # killprocess 3837975 00:17:32.738 04:08:47 -- common/autotest_common.sh@936 -- # '[' -z 3837975 ']' 00:17:32.738 04:08:47 -- common/autotest_common.sh@940 -- # kill -0 3837975 00:17:32.738 04:08:47 -- common/autotest_common.sh@941 -- # uname 00:17:32.738 04:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.738 04:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3837975 00:17:32.738 04:08:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:32.738 04:08:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:32.738 04:08:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3837975' 00:17:32.738 killing process with pid 3837975 00:17:32.738 04:08:47 -- common/autotest_common.sh@955 -- # kill 3837975 00:17:32.738 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.738 00:17:32.738 Latency(us) 00:17:32.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.738 =================================================================================================================== 00:17:32.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.738 [2024-04-19 04:08:47.244727] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:32.738 04:08:47 -- common/autotest_common.sh@960 -- # wait 3837975 00:17:32.997 04:08:47 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.p2vZaM4iuw 00:17:32.997 04:08:47 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2vZaM4iuw 00:17:32.997 04:08:47 -- common/autotest_common.sh@638 -- # local es=0 00:17:32.997 04:08:47 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2vZaM4iuw 00:17:32.997 04:08:47 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:32.997 04:08:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:32.997 04:08:47 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:32.997 04:08:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:32.997 04:08:47 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p2vZaM4iuw 00:17:32.997 04:08:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.997 04:08:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:32.997 04:08:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.997 04:08:47 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.p2vZaM4iuw' 00:17:32.997 04:08:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.997 04:08:47 -- target/tls.sh@28 -- # 
bdevperf_pid=3840076 00:17:32.997 04:08:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.997 04:08:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.997 04:08:47 -- target/tls.sh@31 -- # waitforlisten 3840076 /var/tmp/bdevperf.sock 00:17:32.997 04:08:47 -- common/autotest_common.sh@817 -- # '[' -z 3840076 ']' 00:17:32.997 04:08:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.997 04:08:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:32.997 04:08:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.997 04:08:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:32.997 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:32.997 [2024-04-19 04:08:47.490070] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:32.997 [2024-04-19 04:08:47.490119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840076 ] 00:17:32.998 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.269 [2024-04-19 04:08:47.535670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.269 [2024-04-19 04:08:47.600857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.269 04:08:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.269 04:08:47 -- common/autotest_common.sh@850 -- # return 0 00:17:33.269 04:08:47 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:33.533 [2024-04-19 04:08:47.927645] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.533 [2024-04-19 04:08:47.927685] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:33.533 [2024-04-19 04:08:47.927690] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.p2vZaM4iuw 00:17:33.533 request: 00:17:33.533 { 00:17:33.533 "name": "TLSTEST", 00:17:33.533 "trtype": "tcp", 00:17:33.533 "traddr": "10.0.0.2", 00:17:33.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.533 "adrfam": "ipv4", 00:17:33.533 "trsvcid": "4420", 00:17:33.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.533 "psk": "/tmp/tmp.p2vZaM4iuw", 00:17:33.533 "method": "bdev_nvme_attach_controller", 00:17:33.533 "req_id": 1 00:17:33.533 } 00:17:33.534 Got JSON-RPC error response 00:17:33.534 response: 00:17:33.534 { 00:17:33.534 "code": -1, 00:17:33.534 "message": "Operation not permitted" 00:17:33.534 } 00:17:33.534 04:08:47 -- target/tls.sh@36 -- # killprocess 3840076 00:17:33.534 04:08:47 -- common/autotest_common.sh@936 -- # '[' -z 3840076 ']' 00:17:33.534 04:08:47 -- common/autotest_common.sh@940 -- # kill -0 3840076 00:17:33.534 04:08:47 -- common/autotest_common.sh@941 -- # uname 00:17:33.534 04:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.534 
04:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3840076 00:17:33.534 04:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:33.534 04:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:33.534 04:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840076' 00:17:33.534 killing process with pid 3840076 00:17:33.534 04:08:48 -- common/autotest_common.sh@955 -- # kill 3840076 00:17:33.534 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.534 00:17:33.534 Latency(us) 00:17:33.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.534 =================================================================================================================== 00:17:33.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.534 04:08:48 -- common/autotest_common.sh@960 -- # wait 3840076 00:17:33.793 04:08:48 -- target/tls.sh@37 -- # return 1 00:17:33.793 04:08:48 -- common/autotest_common.sh@641 -- # es=1 00:17:33.793 04:08:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:33.793 04:08:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:33.793 04:08:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:33.793 04:08:48 -- target/tls.sh@174 -- # killprocess 3837678 00:17:33.793 04:08:48 -- common/autotest_common.sh@936 -- # '[' -z 3837678 ']' 00:17:33.793 04:08:48 -- common/autotest_common.sh@940 -- # kill -0 3837678 00:17:33.793 04:08:48 -- common/autotest_common.sh@941 -- # uname 00:17:33.793 04:08:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.793 04:08:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3837678 00:17:33.793 04:08:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:33.793 04:08:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:33.793 04:08:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3837678' 00:17:33.793 killing process with pid 3837678 00:17:33.793 04:08:48 -- common/autotest_common.sh@955 -- # kill 3837678 00:17:33.793 [2024-04-19 04:08:48.245074] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:33.793 04:08:48 -- common/autotest_common.sh@960 -- # wait 3837678 00:17:34.052 04:08:48 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:34.052 04:08:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:34.052 04:08:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:34.052 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:34.052 04:08:48 -- nvmf/common.sh@470 -- # nvmfpid=3840344 00:17:34.052 04:08:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.052 04:08:48 -- nvmf/common.sh@471 -- # waitforlisten 3840344 00:17:34.052 04:08:48 -- common/autotest_common.sh@817 -- # '[' -z 3840344 ']' 00:17:34.052 04:08:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.052 04:08:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:34.052 04:08:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
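
The @170/@171 pair that just failed, and the @177 case coming up, check the same hygiene rule from both ends: a PSK file that is group- or world-accessible is refused outright ("Incorrect permissions for PSK file") before any handshake is attempted. On the initiator that surfaced above as JSON-RPC error -1 "Operation not permitted"; on the target it surfaces below as -32603 from nvmf_subsystem_add_host. In short:

    key=/tmp/tmp.p2vZaM4iuw
    chmod 0666 "$key"    # too permissive; both sides reject the file outright
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 "$key"
    chmod 0600 "$key"    # owner-only again, so the later @185 setup succeeds
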
00:17:34.052 04:08:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:34.052 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:34.052 [2024-04-19 04:08:48.539311] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:34.052 [2024-04-19 04:08:48.539383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.052 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.310 [2024-04-19 04:08:48.618550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.310 [2024-04-19 04:08:48.700067] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.310 [2024-04-19 04:08:48.700113] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.310 [2024-04-19 04:08:48.700125] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.310 [2024-04-19 04:08:48.700135] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.311 [2024-04-19 04:08:48.700142] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.311 [2024-04-19 04:08:48.700162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.311 04:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.311 04:08:48 -- common/autotest_common.sh@850 -- # return 0 00:17:34.311 04:08:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:34.311 04:08:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:34.311 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:34.311 04:08:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.311 04:08:48 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:34.311 04:08:48 -- common/autotest_common.sh@638 -- # local es=0 00:17:34.311 04:08:48 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:34.311 04:08:48 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:34.311 04:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.311 04:08:48 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:34.311 04:08:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.311 04:08:48 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:34.311 04:08:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.p2vZaM4iuw 00:17:34.311 04:08:48 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.569 [2024-04-19 04:08:49.055385] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.569 04:08:49 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:34.827 04:08:49 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.085 [2024-04-19 04:08:49.448422] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.085 [2024-04-19 04:08:49.448641] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.085 04:08:49 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.343 malloc0 00:17:35.343 04:08:49 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.601 04:08:49 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:35.601 [2024-04-19 04:08:50.019247] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:35.602 [2024-04-19 04:08:50.019281] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:35.602 [2024-04-19 04:08:50.019309] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:35.602 request: 00:17:35.602 { 00:17:35.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.602 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.602 "psk": "/tmp/tmp.p2vZaM4iuw", 00:17:35.602 "method": "nvmf_subsystem_add_host", 00:17:35.602 "req_id": 1 00:17:35.602 } 00:17:35.602 Got JSON-RPC error response 00:17:35.602 response: 00:17:35.602 { 00:17:35.602 "code": -32603, 00:17:35.602 "message": "Internal error" 00:17:35.602 } 00:17:35.602 04:08:50 -- common/autotest_common.sh@641 -- # es=1 00:17:35.602 04:08:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:35.602 04:08:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:35.602 04:08:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:35.602 04:08:50 -- target/tls.sh@180 -- # killprocess 3840344 00:17:35.602 04:08:50 -- common/autotest_common.sh@936 -- # '[' -z 3840344 ']' 00:17:35.602 04:08:50 -- common/autotest_common.sh@940 -- # kill -0 3840344 00:17:35.602 04:08:50 -- common/autotest_common.sh@941 -- # uname 00:17:35.602 04:08:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.602 04:08:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3840344 00:17:35.602 04:08:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:35.602 04:08:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:35.602 04:08:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840344' 00:17:35.602 killing process with pid 3840344 00:17:35.602 04:08:50 -- common/autotest_common.sh@955 -- # kill 3840344 00:17:35.602 04:08:50 -- common/autotest_common.sh@960 -- # wait 3840344 00:17:35.861 04:08:50 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.p2vZaM4iuw 00:17:35.861 04:08:50 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:35.861 04:08:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.861 04:08:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.861 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.861 04:08:50 -- nvmf/common.sh@470 -- # nvmfpid=3840645 00:17:35.861 04:08:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:35.861 04:08:50 -- nvmf/common.sh@471 -- # waitforlisten 3840645 00:17:35.861 04:08:50 -- common/autotest_common.sh@817 -- # '[' -z 3840645 ']' 00:17:35.861 04:08:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.861 04:08:50 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.861 04:08:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.861 04:08:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.861 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.120 [2024-04-19 04:08:50.401449] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:36.120 [2024-04-19 04:08:50.401512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.120 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.120 [2024-04-19 04:08:50.481686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.120 [2024-04-19 04:08:50.568801] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.120 [2024-04-19 04:08:50.568846] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.120 [2024-04-19 04:08:50.568856] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.120 [2024-04-19 04:08:50.568865] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.120 [2024-04-19 04:08:50.568872] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.120 [2024-04-19 04:08:50.568899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.378 04:08:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.378 04:08:50 -- common/autotest_common.sh@850 -- # return 0 00:17:36.378 04:08:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.378 04:08:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.378 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.378 04:08:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.378 04:08:50 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:36.378 04:08:50 -- target/tls.sh@49 -- # local key=/tmp/tmp.p2vZaM4iuw 00:17:36.378 04:08:50 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:36.637 [2024-04-19 04:08:50.921946] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.637 04:08:50 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:36.637 04:08:51 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:36.894 [2024-04-19 04:08:51.327010] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.894 [2024-04-19 04:08:51.327227] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.894 04:08:51 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:37.152 malloc0 00:17:37.152 04:08:51 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:37.409 04:08:51 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:37.409 [2024-04-19 04:08:51.873792] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:37.409 04:08:51 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.409 04:08:51 -- target/tls.sh@188 -- # bdevperf_pid=3840927 00:17:37.409 04:08:51 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.409 04:08:51 -- target/tls.sh@191 -- # waitforlisten 3840927 /var/tmp/bdevperf.sock 00:17:37.409 04:08:51 -- common/autotest_common.sh@817 -- # '[' -z 3840927 ']' 00:17:37.410 04:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.410 04:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.410 04:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.410 04:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.410 04:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:37.410 [2024-04-19 04:08:51.915183] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:37.410 [2024-04-19 04:08:51.915224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840927 ] 00:17:37.410 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.668 [2024-04-19 04:08:51.960303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.668 [2024-04-19 04:08:52.028364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.668 04:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.668 04:08:52 -- common/autotest_common.sh@850 -- # return 0 00:17:37.668 04:08:52 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:37.926 [2024-04-19 04:08:52.255804] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.926 [2024-04-19 04:08:52.255867] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.926 TLSTESTn1 00:17:37.926 04:08:52 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:38.184 04:08:52 -- target/tls.sh@196 -- # tgtconf='{ 00:17:38.184 "subsystems": [ 00:17:38.184 { 00:17:38.184 "subsystem": "keyring", 00:17:38.184 "config": [] 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "subsystem": "iobuf", 00:17:38.184 "config": [ 00:17:38.184 { 00:17:38.184 "method": "iobuf_set_options", 00:17:38.184 "params": { 00:17:38.184 
"small_pool_count": 8192, 00:17:38.184 "large_pool_count": 1024, 00:17:38.184 "small_bufsize": 8192, 00:17:38.184 "large_bufsize": 135168 00:17:38.184 } 00:17:38.184 } 00:17:38.184 ] 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "subsystem": "sock", 00:17:38.184 "config": [ 00:17:38.184 { 00:17:38.184 "method": "sock_impl_set_options", 00:17:38.184 "params": { 00:17:38.184 "impl_name": "posix", 00:17:38.184 "recv_buf_size": 2097152, 00:17:38.184 "send_buf_size": 2097152, 00:17:38.184 "enable_recv_pipe": true, 00:17:38.184 "enable_quickack": false, 00:17:38.184 "enable_placement_id": 0, 00:17:38.184 "enable_zerocopy_send_server": true, 00:17:38.184 "enable_zerocopy_send_client": false, 00:17:38.184 "zerocopy_threshold": 0, 00:17:38.184 "tls_version": 0, 00:17:38.184 "enable_ktls": false 00:17:38.184 } 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "method": "sock_impl_set_options", 00:17:38.184 "params": { 00:17:38.184 "impl_name": "ssl", 00:17:38.184 "recv_buf_size": 4096, 00:17:38.184 "send_buf_size": 4096, 00:17:38.184 "enable_recv_pipe": true, 00:17:38.184 "enable_quickack": false, 00:17:38.184 "enable_placement_id": 0, 00:17:38.184 "enable_zerocopy_send_server": true, 00:17:38.184 "enable_zerocopy_send_client": false, 00:17:38.184 "zerocopy_threshold": 0, 00:17:38.184 "tls_version": 0, 00:17:38.184 "enable_ktls": false 00:17:38.184 } 00:17:38.184 } 00:17:38.184 ] 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "subsystem": "vmd", 00:17:38.184 "config": [] 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "subsystem": "accel", 00:17:38.184 "config": [ 00:17:38.184 { 00:17:38.184 "method": "accel_set_options", 00:17:38.184 "params": { 00:17:38.184 "small_cache_size": 128, 00:17:38.184 "large_cache_size": 16, 00:17:38.184 "task_count": 2048, 00:17:38.184 "sequence_count": 2048, 00:17:38.184 "buf_count": 2048 00:17:38.184 } 00:17:38.184 } 00:17:38.184 ] 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "subsystem": "bdev", 00:17:38.184 "config": [ 00:17:38.184 { 00:17:38.184 "method": "bdev_set_options", 00:17:38.184 "params": { 00:17:38.184 "bdev_io_pool_size": 65535, 00:17:38.184 "bdev_io_cache_size": 256, 00:17:38.184 "bdev_auto_examine": true, 00:17:38.184 "iobuf_small_cache_size": 128, 00:17:38.184 "iobuf_large_cache_size": 16 00:17:38.184 } 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "method": "bdev_raid_set_options", 00:17:38.184 "params": { 00:17:38.184 "process_window_size_kb": 1024 00:17:38.184 } 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "method": "bdev_iscsi_set_options", 00:17:38.184 "params": { 00:17:38.184 "timeout_sec": 30 00:17:38.184 } 00:17:38.184 }, 00:17:38.184 { 00:17:38.184 "method": "bdev_nvme_set_options", 00:17:38.184 "params": { 00:17:38.184 "action_on_timeout": "none", 00:17:38.184 "timeout_us": 0, 00:17:38.184 "timeout_admin_us": 0, 00:17:38.184 "keep_alive_timeout_ms": 10000, 00:17:38.184 "arbitration_burst": 0, 00:17:38.184 "low_priority_weight": 0, 00:17:38.184 "medium_priority_weight": 0, 00:17:38.184 "high_priority_weight": 0, 00:17:38.184 "nvme_adminq_poll_period_us": 10000, 00:17:38.184 "nvme_ioq_poll_period_us": 0, 00:17:38.184 "io_queue_requests": 0, 00:17:38.185 "delay_cmd_submit": true, 00:17:38.185 "transport_retry_count": 4, 00:17:38.185 "bdev_retry_count": 3, 00:17:38.185 "transport_ack_timeout": 0, 00:17:38.185 "ctrlr_loss_timeout_sec": 0, 00:17:38.185 "reconnect_delay_sec": 0, 00:17:38.185 "fast_io_fail_timeout_sec": 0, 00:17:38.185 "disable_auto_failback": false, 00:17:38.185 "generate_uuids": false, 00:17:38.185 "transport_tos": 0, 00:17:38.185 "nvme_error_stat": 
false, 00:17:38.185 "rdma_srq_size": 0, 00:17:38.185 "io_path_stat": false, 00:17:38.185 "allow_accel_sequence": false, 00:17:38.185 "rdma_max_cq_size": 0, 00:17:38.185 "rdma_cm_event_timeout_ms": 0, 00:17:38.185 "dhchap_digests": [ 00:17:38.185 "sha256", 00:17:38.185 "sha384", 00:17:38.185 "sha512" 00:17:38.185 ], 00:17:38.185 "dhchap_dhgroups": [ 00:17:38.185 "null", 00:17:38.185 "ffdhe2048", 00:17:38.185 "ffdhe3072", 00:17:38.185 "ffdhe4096", 00:17:38.185 "ffdhe6144", 00:17:38.185 "ffdhe8192" 00:17:38.185 ] 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "bdev_nvme_set_hotplug", 00:17:38.185 "params": { 00:17:38.185 "period_us": 100000, 00:17:38.185 "enable": false 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "bdev_malloc_create", 00:17:38.185 "params": { 00:17:38.185 "name": "malloc0", 00:17:38.185 "num_blocks": 8192, 00:17:38.185 "block_size": 4096, 00:17:38.185 "physical_block_size": 4096, 00:17:38.185 "uuid": "15859a72-12b8-4e40-862c-2787862822a6", 00:17:38.185 "optimal_io_boundary": 0 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "bdev_wait_for_examine" 00:17:38.185 } 00:17:38.185 ] 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "subsystem": "nbd", 00:17:38.185 "config": [] 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "subsystem": "scheduler", 00:17:38.185 "config": [ 00:17:38.185 { 00:17:38.185 "method": "framework_set_scheduler", 00:17:38.185 "params": { 00:17:38.185 "name": "static" 00:17:38.185 } 00:17:38.185 } 00:17:38.185 ] 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "subsystem": "nvmf", 00:17:38.185 "config": [ 00:17:38.185 { 00:17:38.185 "method": "nvmf_set_config", 00:17:38.185 "params": { 00:17:38.185 "discovery_filter": "match_any", 00:17:38.185 "admin_cmd_passthru": { 00:17:38.185 "identify_ctrlr": false 00:17:38.185 } 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_set_max_subsystems", 00:17:38.185 "params": { 00:17:38.185 "max_subsystems": 1024 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_set_crdt", 00:17:38.185 "params": { 00:17:38.185 "crdt1": 0, 00:17:38.185 "crdt2": 0, 00:17:38.185 "crdt3": 0 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_create_transport", 00:17:38.185 "params": { 00:17:38.185 "trtype": "TCP", 00:17:38.185 "max_queue_depth": 128, 00:17:38.185 "max_io_qpairs_per_ctrlr": 127, 00:17:38.185 "in_capsule_data_size": 4096, 00:17:38.185 "max_io_size": 131072, 00:17:38.185 "io_unit_size": 131072, 00:17:38.185 "max_aq_depth": 128, 00:17:38.185 "num_shared_buffers": 511, 00:17:38.185 "buf_cache_size": 4294967295, 00:17:38.185 "dif_insert_or_strip": false, 00:17:38.185 "zcopy": false, 00:17:38.185 "c2h_success": false, 00:17:38.185 "sock_priority": 0, 00:17:38.185 "abort_timeout_sec": 1, 00:17:38.185 "ack_timeout": 0 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_create_subsystem", 00:17:38.185 "params": { 00:17:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.185 "allow_any_host": false, 00:17:38.185 "serial_number": "SPDK00000000000001", 00:17:38.185 "model_number": "SPDK bdev Controller", 00:17:38.185 "max_namespaces": 10, 00:17:38.185 "min_cntlid": 1, 00:17:38.185 "max_cntlid": 65519, 00:17:38.185 "ana_reporting": false 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_subsystem_add_host", 00:17:38.185 "params": { 00:17:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.185 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.185 "psk": 
"/tmp/tmp.p2vZaM4iuw" 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_subsystem_add_ns", 00:17:38.185 "params": { 00:17:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.185 "namespace": { 00:17:38.185 "nsid": 1, 00:17:38.185 "bdev_name": "malloc0", 00:17:38.185 "nguid": "15859A7212B84E40862C2787862822A6", 00:17:38.185 "uuid": "15859a72-12b8-4e40-862c-2787862822a6", 00:17:38.185 "no_auto_visible": false 00:17:38.185 } 00:17:38.185 } 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "method": "nvmf_subsystem_add_listener", 00:17:38.185 "params": { 00:17:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.185 "listen_address": { 00:17:38.185 "trtype": "TCP", 00:17:38.185 "adrfam": "IPv4", 00:17:38.185 "traddr": "10.0.0.2", 00:17:38.185 "trsvcid": "4420" 00:17:38.185 }, 00:17:38.185 "secure_channel": true 00:17:38.185 } 00:17:38.185 } 00:17:38.185 ] 00:17:38.185 } 00:17:38.185 ] 00:17:38.185 }' 00:17:38.185 04:08:52 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:38.444 04:08:52 -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:38.444 "subsystems": [ 00:17:38.444 { 00:17:38.444 "subsystem": "keyring", 00:17:38.444 "config": [] 00:17:38.444 }, 00:17:38.444 { 00:17:38.444 "subsystem": "iobuf", 00:17:38.444 "config": [ 00:17:38.444 { 00:17:38.444 "method": "iobuf_set_options", 00:17:38.444 "params": { 00:17:38.444 "small_pool_count": 8192, 00:17:38.444 "large_pool_count": 1024, 00:17:38.445 "small_bufsize": 8192, 00:17:38.445 "large_bufsize": 135168 00:17:38.445 } 00:17:38.445 } 00:17:38.445 ] 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "subsystem": "sock", 00:17:38.445 "config": [ 00:17:38.445 { 00:17:38.445 "method": "sock_impl_set_options", 00:17:38.445 "params": { 00:17:38.445 "impl_name": "posix", 00:17:38.445 "recv_buf_size": 2097152, 00:17:38.445 "send_buf_size": 2097152, 00:17:38.445 "enable_recv_pipe": true, 00:17:38.445 "enable_quickack": false, 00:17:38.445 "enable_placement_id": 0, 00:17:38.445 "enable_zerocopy_send_server": true, 00:17:38.445 "enable_zerocopy_send_client": false, 00:17:38.445 "zerocopy_threshold": 0, 00:17:38.445 "tls_version": 0, 00:17:38.445 "enable_ktls": false 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "sock_impl_set_options", 00:17:38.445 "params": { 00:17:38.445 "impl_name": "ssl", 00:17:38.445 "recv_buf_size": 4096, 00:17:38.445 "send_buf_size": 4096, 00:17:38.445 "enable_recv_pipe": true, 00:17:38.445 "enable_quickack": false, 00:17:38.445 "enable_placement_id": 0, 00:17:38.445 "enable_zerocopy_send_server": true, 00:17:38.445 "enable_zerocopy_send_client": false, 00:17:38.445 "zerocopy_threshold": 0, 00:17:38.445 "tls_version": 0, 00:17:38.445 "enable_ktls": false 00:17:38.445 } 00:17:38.445 } 00:17:38.445 ] 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "subsystem": "vmd", 00:17:38.445 "config": [] 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "subsystem": "accel", 00:17:38.445 "config": [ 00:17:38.445 { 00:17:38.445 "method": "accel_set_options", 00:17:38.445 "params": { 00:17:38.445 "small_cache_size": 128, 00:17:38.445 "large_cache_size": 16, 00:17:38.445 "task_count": 2048, 00:17:38.445 "sequence_count": 2048, 00:17:38.445 "buf_count": 2048 00:17:38.445 } 00:17:38.445 } 00:17:38.445 ] 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "subsystem": "bdev", 00:17:38.445 "config": [ 00:17:38.445 { 00:17:38.445 "method": "bdev_set_options", 00:17:38.445 "params": { 00:17:38.445 "bdev_io_pool_size": 65535, 00:17:38.445 
"bdev_io_cache_size": 256, 00:17:38.445 "bdev_auto_examine": true, 00:17:38.445 "iobuf_small_cache_size": 128, 00:17:38.445 "iobuf_large_cache_size": 16 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_raid_set_options", 00:17:38.445 "params": { 00:17:38.445 "process_window_size_kb": 1024 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_iscsi_set_options", 00:17:38.445 "params": { 00:17:38.445 "timeout_sec": 30 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_nvme_set_options", 00:17:38.445 "params": { 00:17:38.445 "action_on_timeout": "none", 00:17:38.445 "timeout_us": 0, 00:17:38.445 "timeout_admin_us": 0, 00:17:38.445 "keep_alive_timeout_ms": 10000, 00:17:38.445 "arbitration_burst": 0, 00:17:38.445 "low_priority_weight": 0, 00:17:38.445 "medium_priority_weight": 0, 00:17:38.445 "high_priority_weight": 0, 00:17:38.445 "nvme_adminq_poll_period_us": 10000, 00:17:38.445 "nvme_ioq_poll_period_us": 0, 00:17:38.445 "io_queue_requests": 512, 00:17:38.445 "delay_cmd_submit": true, 00:17:38.445 "transport_retry_count": 4, 00:17:38.445 "bdev_retry_count": 3, 00:17:38.445 "transport_ack_timeout": 0, 00:17:38.445 "ctrlr_loss_timeout_sec": 0, 00:17:38.445 "reconnect_delay_sec": 0, 00:17:38.445 "fast_io_fail_timeout_sec": 0, 00:17:38.445 "disable_auto_failback": false, 00:17:38.445 "generate_uuids": false, 00:17:38.445 "transport_tos": 0, 00:17:38.445 "nvme_error_stat": false, 00:17:38.445 "rdma_srq_size": 0, 00:17:38.445 "io_path_stat": false, 00:17:38.445 "allow_accel_sequence": false, 00:17:38.445 "rdma_max_cq_size": 0, 00:17:38.445 "rdma_cm_event_timeout_ms": 0, 00:17:38.445 "dhchap_digests": [ 00:17:38.445 "sha256", 00:17:38.445 "sha384", 00:17:38.445 "sha512" 00:17:38.445 ], 00:17:38.445 "dhchap_dhgroups": [ 00:17:38.445 "null", 00:17:38.445 "ffdhe2048", 00:17:38.445 "ffdhe3072", 00:17:38.445 "ffdhe4096", 00:17:38.445 "ffdhe6144", 00:17:38.445 "ffdhe8192" 00:17:38.445 ] 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_nvme_attach_controller", 00:17:38.445 "params": { 00:17:38.445 "name": "TLSTEST", 00:17:38.445 "trtype": "TCP", 00:17:38.445 "adrfam": "IPv4", 00:17:38.445 "traddr": "10.0.0.2", 00:17:38.445 "trsvcid": "4420", 00:17:38.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.445 "prchk_reftag": false, 00:17:38.445 "prchk_guard": false, 00:17:38.445 "ctrlr_loss_timeout_sec": 0, 00:17:38.445 "reconnect_delay_sec": 0, 00:17:38.445 "fast_io_fail_timeout_sec": 0, 00:17:38.445 "psk": "/tmp/tmp.p2vZaM4iuw", 00:17:38.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.445 "hdgst": false, 00:17:38.445 "ddgst": false 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_nvme_set_hotplug", 00:17:38.445 "params": { 00:17:38.445 "period_us": 100000, 00:17:38.445 "enable": false 00:17:38.445 } 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "method": "bdev_wait_for_examine" 00:17:38.445 } 00:17:38.445 ] 00:17:38.445 }, 00:17:38.445 { 00:17:38.445 "subsystem": "nbd", 00:17:38.445 "config": [] 00:17:38.445 } 00:17:38.445 ] 00:17:38.445 }' 00:17:38.445 04:08:52 -- target/tls.sh@199 -- # killprocess 3840927 00:17:38.445 04:08:52 -- common/autotest_common.sh@936 -- # '[' -z 3840927 ']' 00:17:38.445 04:08:52 -- common/autotest_common.sh@940 -- # kill -0 3840927 00:17:38.445 04:08:52 -- common/autotest_common.sh@941 -- # uname 00:17:38.445 04:08:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.445 04:08:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 3840927 00:17:38.445 04:08:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.445 04:08:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.445 04:08:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840927' 00:17:38.445 killing process with pid 3840927 00:17:38.445 04:08:52 -- common/autotest_common.sh@955 -- # kill 3840927 00:17:38.445 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.445 00:17:38.445 Latency(us) 00:17:38.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.445 =================================================================================================================== 00:17:38.445 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.445 [2024-04-19 04:08:52.968357] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.445 04:08:52 -- common/autotest_common.sh@960 -- # wait 3840927 00:17:38.703 04:08:53 -- target/tls.sh@200 -- # killprocess 3840645 00:17:38.703 04:08:53 -- common/autotest_common.sh@936 -- # '[' -z 3840645 ']' 00:17:38.703 04:08:53 -- common/autotest_common.sh@940 -- # kill -0 3840645 00:17:38.703 04:08:53 -- common/autotest_common.sh@941 -- # uname 00:17:38.703 04:08:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.703 04:08:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3840645 00:17:38.703 04:08:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.703 04:08:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.703 04:08:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3840645' 00:17:38.703 killing process with pid 3840645 00:17:38.703 04:08:53 -- common/autotest_common.sh@955 -- # kill 3840645 00:17:38.703 [2024-04-19 04:08:53.214109] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:38.703 04:08:53 -- common/autotest_common.sh@960 -- # wait 3840645 00:17:38.961 04:08:53 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:38.961 04:08:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:38.961 04:08:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.961 04:08:53 -- target/tls.sh@203 -- # echo '{ 00:17:38.961 "subsystems": [ 00:17:38.961 { 00:17:38.961 "subsystem": "keyring", 00:17:38.961 "config": [] 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "subsystem": "iobuf", 00:17:38.961 "config": [ 00:17:38.961 { 00:17:38.961 "method": "iobuf_set_options", 00:17:38.961 "params": { 00:17:38.961 "small_pool_count": 8192, 00:17:38.961 "large_pool_count": 1024, 00:17:38.961 "small_bufsize": 8192, 00:17:38.961 "large_bufsize": 135168 00:17:38.961 } 00:17:38.961 } 00:17:38.961 ] 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "subsystem": "sock", 00:17:38.961 "config": [ 00:17:38.961 { 00:17:38.961 "method": "sock_impl_set_options", 00:17:38.961 "params": { 00:17:38.961 "impl_name": "posix", 00:17:38.961 "recv_buf_size": 2097152, 00:17:38.961 "send_buf_size": 2097152, 00:17:38.961 "enable_recv_pipe": true, 00:17:38.961 "enable_quickack": false, 00:17:38.961 "enable_placement_id": 0, 00:17:38.961 "enable_zerocopy_send_server": true, 00:17:38.961 "enable_zerocopy_send_client": false, 00:17:38.961 "zerocopy_threshold": 0, 00:17:38.961 "tls_version": 0, 00:17:38.961 "enable_ktls": false 00:17:38.961 } 
00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "method": "sock_impl_set_options", 00:17:38.961 "params": { 00:17:38.961 "impl_name": "ssl", 00:17:38.961 "recv_buf_size": 4096, 00:17:38.961 "send_buf_size": 4096, 00:17:38.961 "enable_recv_pipe": true, 00:17:38.961 "enable_quickack": false, 00:17:38.961 "enable_placement_id": 0, 00:17:38.961 "enable_zerocopy_send_server": true, 00:17:38.961 "enable_zerocopy_send_client": false, 00:17:38.961 "zerocopy_threshold": 0, 00:17:38.961 "tls_version": 0, 00:17:38.961 "enable_ktls": false 00:17:38.961 } 00:17:38.961 } 00:17:38.961 ] 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "subsystem": "vmd", 00:17:38.961 "config": [] 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "subsystem": "accel", 00:17:38.961 "config": [ 00:17:38.961 { 00:17:38.961 "method": "accel_set_options", 00:17:38.961 "params": { 00:17:38.961 "small_cache_size": 128, 00:17:38.961 "large_cache_size": 16, 00:17:38.961 "task_count": 2048, 00:17:38.961 "sequence_count": 2048, 00:17:38.961 "buf_count": 2048 00:17:38.961 } 00:17:38.961 } 00:17:38.961 ] 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "subsystem": "bdev", 00:17:38.961 "config": [ 00:17:38.961 { 00:17:38.961 "method": "bdev_set_options", 00:17:38.961 "params": { 00:17:38.961 "bdev_io_pool_size": 65535, 00:17:38.961 "bdev_io_cache_size": 256, 00:17:38.961 "bdev_auto_examine": true, 00:17:38.961 "iobuf_small_cache_size": 128, 00:17:38.961 "iobuf_large_cache_size": 16 00:17:38.961 } 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "method": "bdev_raid_set_options", 00:17:38.961 "params": { 00:17:38.961 "process_window_size_kb": 1024 00:17:38.961 } 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "method": "bdev_iscsi_set_options", 00:17:38.961 "params": { 00:17:38.961 "timeout_sec": 30 00:17:38.961 } 00:17:38.961 }, 00:17:38.961 { 00:17:38.961 "method": "bdev_nvme_set_options", 00:17:38.961 "params": { 00:17:38.961 "action_on_timeout": "none", 00:17:38.961 "timeout_us": 0, 00:17:38.961 "timeout_admin_us": 0, 00:17:38.961 "keep_alive_timeout_ms": 10000, 00:17:38.961 "arbitration_burst": 0, 00:17:38.961 "low_priority_weight": 0, 00:17:38.961 "medium_priority_weight": 0, 00:17:38.961 "high_priority_weight": 0, 00:17:38.961 "nvme_adminq_poll_period_us": 10000, 00:17:38.961 "nvme_ioq_poll_period_us": 0, 00:17:38.961 "io_queue_requests": 0, 00:17:38.962 "delay_cmd_submit": true, 00:17:38.962 "transport_retry_count": 4, 00:17:38.962 "bdev_retry_count": 3, 00:17:38.962 "transport_ack_timeout": 0, 00:17:38.962 "ctrlr_loss_timeout_sec": 0, 00:17:38.962 "reconnect_delay_sec": 0, 00:17:38.962 "fast_io_fail_timeout_sec": 0, 00:17:38.962 "disable_auto_failback": false, 00:17:38.962 "generate_uuids": false, 00:17:38.962 "transport_tos": 0, 00:17:38.962 "nvme_error_stat": false, 00:17:38.962 "rdma_srq_size": 0, 00:17:38.962 "io_path_stat": false, 00:17:38.962 "allow_accel_sequence": false, 00:17:38.962 "rdma_max_cq_size": 0, 00:17:38.962 "rdma_cm_event_timeout_ms": 0, 00:17:38.962 "dhchap_digests": [ 00:17:38.962 "sha256", 00:17:38.962 "sha384", 00:17:38.962 "sha512" 00:17:38.962 ], 00:17:38.962 "dhchap_dhgroups": [ 00:17:38.962 "null", 00:17:38.962 "ffdhe2048", 00:17:38.962 "ffdhe3072", 00:17:38.962 "ffdhe4096", 00:17:38.962 "ffdhe6144", 00:17:38.962 "ffdhe8192" 00:17:38.962 ] 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "bdev_nvme_set_hotplug", 00:17:38.962 "params": { 00:17:38.962 "period_us": 100000, 00:17:38.962 "enable": false 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "bdev_malloc_create", 00:17:38.962 
"params": { 00:17:38.962 "name": "malloc0", 00:17:38.962 "num_blocks": 8192, 00:17:38.962 "block_size": 4096, 00:17:38.962 "physical_block_size": 4096, 00:17:38.962 "uuid": "15859a72-12b8-4e40-862c-2787862822a6", 00:17:38.962 "optimal_io_boundary": 0 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "bdev_wait_for_examine" 00:17:38.962 } 00:17:38.962 ] 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "subsystem": "nbd", 00:17:38.962 "config": [] 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "subsystem": "scheduler", 00:17:38.962 "config": [ 00:17:38.962 { 00:17:38.962 "method": "framework_set_scheduler", 00:17:38.962 "params": { 00:17:38.962 "name": "static" 00:17:38.962 } 00:17:38.962 } 00:17:38.962 ] 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "subsystem": "nvmf", 00:17:38.962 "config": [ 00:17:38.962 { 00:17:38.962 "method": "nvmf_set_config", 00:17:38.962 "params": { 00:17:38.962 "discovery_filter": "match_any", 00:17:38.962 "admin_cmd_passthru": { 00:17:38.962 "identify_ctrlr": false 00:17:38.962 } 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_set_max_subsystems", 00:17:38.962 "params": { 00:17:38.962 "max_subsystems": 1024 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_set_crdt", 00:17:38.962 "params": { 00:17:38.962 "crdt1": 0, 00:17:38.962 "crdt2": 0, 00:17:38.962 "crdt3": 0 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_create_transport", 00:17:38.962 "params": { 00:17:38.962 "trtype": "TCP", 00:17:38.962 "max_queue_depth": 128, 00:17:38.962 "max_io_qpairs_per_ctrlr": 127, 00:17:38.962 "in_capsule_data_size": 4096, 00:17:38.962 "max_io_size": 131072, 00:17:38.962 "io_unit_size": 131072, 00:17:38.962 "max_aq_depth": 128, 00:17:38.962 "num_shared_buffers": 511, 00:17:38.962 "buf_cache_size": 4294967295, 00:17:38.962 "dif_insert_or_strip": false, 00:17:38.962 "zcopy": false, 00:17:38.962 "c2h_success": false, 00:17:38.962 "sock_priority": 0, 00:17:38.962 "abort_timeout_sec": 1, 00:17:38.962 "ack_timeout": 0 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_create_subsystem", 00:17:38.962 "params": { 00:17:38.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.962 "allow_any_host": false, 00:17:38.962 "serial_number": "SPDK00000000000001", 00:17:38.962 "model_number": "SPDK bdev Controller", 00:17:38.962 "max_namespaces": 10, 00:17:38.962 "min_cntlid": 1, 00:17:38.962 "max_cntlid": 65519, 00:17:38.962 "ana_reporting": false 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_subsystem_add_host", 00:17:38.962 "params": { 00:17:38.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.962 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.962 "psk": "/tmp/tmp.p2vZaM4iuw" 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_subsystem_add_ns", 00:17:38.962 "params": { 00:17:38.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.962 "namespace": { 00:17:38.962 "nsid": 1, 00:17:38.962 "bdev_name": "malloc0", 00:17:38.962 "nguid": "15859A7212B84E40862C2787862822A6", 00:17:38.962 "uuid": "15859a72-12b8-4e40-862c-2787862822a6", 00:17:38.962 "no_auto_visible": false 00:17:38.962 } 00:17:38.962 } 00:17:38.962 }, 00:17:38.962 { 00:17:38.962 "method": "nvmf_subsystem_add_listener", 00:17:38.962 "params": { 00:17:38.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.962 "listen_address": { 00:17:38.962 "trtype": "TCP", 00:17:38.962 "adrfam": "IPv4", 00:17:38.962 "traddr": "10.0.0.2", 00:17:38.962 "trsvcid": "4420" 00:17:38.962 }, 00:17:38.962 
"secure_channel": true 00:17:38.962 } 00:17:38.962 } 00:17:38.962 ] 00:17:38.962 } 00:17:38.962 ] 00:17:38.962 }' 00:17:38.962 04:08:53 -- common/autotest_common.sh@10 -- # set +x 00:17:38.962 04:08:53 -- nvmf/common.sh@470 -- # nvmfpid=3841213 00:17:38.962 04:08:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:38.962 04:08:53 -- nvmf/common.sh@471 -- # waitforlisten 3841213 00:17:38.962 04:08:53 -- common/autotest_common.sh@817 -- # '[' -z 3841213 ']' 00:17:38.962 04:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.962 04:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.962 04:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.962 04:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.962 04:08:53 -- common/autotest_common.sh@10 -- # set +x 00:17:39.221 [2024-04-19 04:08:53.503160] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:39.221 [2024-04-19 04:08:53.503201] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.221 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.221 [2024-04-19 04:08:53.567179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.221 [2024-04-19 04:08:53.652857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.221 [2024-04-19 04:08:53.652900] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.221 [2024-04-19 04:08:53.652911] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.221 [2024-04-19 04:08:53.652919] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.221 [2024-04-19 04:08:53.652927] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:39.221 [2024-04-19 04:08:53.652995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.479 [2024-04-19 04:08:53.856829] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.479 [2024-04-19 04:08:53.872780] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:39.479 [2024-04-19 04:08:53.888833] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.479 [2024-04-19 04:08:53.897581] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.046 04:08:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.046 04:08:54 -- common/autotest_common.sh@850 -- # return 0 00:17:40.046 04:08:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:40.046 04:08:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:40.046 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:40.046 04:08:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.046 04:08:54 -- target/tls.sh@207 -- # bdevperf_pid=3841488 00:17:40.046 04:08:54 -- target/tls.sh@208 -- # waitforlisten 3841488 /var/tmp/bdevperf.sock 00:17:40.046 04:08:54 -- common/autotest_common.sh@817 -- # '[' -z 3841488 ']' 00:17:40.047 04:08:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.047 04:08:54 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:40.047 04:08:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.047 04:08:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
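The same trick is used on the initiator side: bdevperf starts idle (-z) with -c /dev/fd/63, and the JSON echoed in the next entry (captured earlier from /var/tmp/bdevperf.sock with save_config) is what flows through that descriptor. A sketch of the pattern, assuming $bdevperfconf holds the printed JSON:

# bring up an idle bdevperf with a replayed config; I/O starts later via RPC
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf") &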
00:17:40.047 04:08:54 -- target/tls.sh@204 -- # echo '{ 00:17:40.047 "subsystems": [ 00:17:40.047 { 00:17:40.047 "subsystem": "keyring", 00:17:40.047 "config": [] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "iobuf", 00:17:40.047 "config": [ 00:17:40.047 { 00:17:40.047 "method": "iobuf_set_options", 00:17:40.047 "params": { 00:17:40.047 "small_pool_count": 8192, 00:17:40.047 "large_pool_count": 1024, 00:17:40.047 "small_bufsize": 8192, 00:17:40.047 "large_bufsize": 135168 00:17:40.047 } 00:17:40.047 } 00:17:40.047 ] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "sock", 00:17:40.047 "config": [ 00:17:40.047 { 00:17:40.047 "method": "sock_impl_set_options", 00:17:40.047 "params": { 00:17:40.047 "impl_name": "posix", 00:17:40.047 "recv_buf_size": 2097152, 00:17:40.047 "send_buf_size": 2097152, 00:17:40.047 "enable_recv_pipe": true, 00:17:40.047 "enable_quickack": false, 00:17:40.047 "enable_placement_id": 0, 00:17:40.047 "enable_zerocopy_send_server": true, 00:17:40.047 "enable_zerocopy_send_client": false, 00:17:40.047 "zerocopy_threshold": 0, 00:17:40.047 "tls_version": 0, 00:17:40.047 "enable_ktls": false 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "sock_impl_set_options", 00:17:40.047 "params": { 00:17:40.047 "impl_name": "ssl", 00:17:40.047 "recv_buf_size": 4096, 00:17:40.047 "send_buf_size": 4096, 00:17:40.047 "enable_recv_pipe": true, 00:17:40.047 "enable_quickack": false, 00:17:40.047 "enable_placement_id": 0, 00:17:40.047 "enable_zerocopy_send_server": true, 00:17:40.047 "enable_zerocopy_send_client": false, 00:17:40.047 "zerocopy_threshold": 0, 00:17:40.047 "tls_version": 0, 00:17:40.047 "enable_ktls": false 00:17:40.047 } 00:17:40.047 } 00:17:40.047 ] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "vmd", 00:17:40.047 "config": [] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "accel", 00:17:40.047 "config": [ 00:17:40.047 { 00:17:40.047 "method": "accel_set_options", 00:17:40.047 "params": { 00:17:40.047 "small_cache_size": 128, 00:17:40.047 "large_cache_size": 16, 00:17:40.047 "task_count": 2048, 00:17:40.047 "sequence_count": 2048, 00:17:40.047 "buf_count": 2048 00:17:40.047 } 00:17:40.047 } 00:17:40.047 ] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "bdev", 00:17:40.047 "config": [ 00:17:40.047 { 00:17:40.047 "method": "bdev_set_options", 00:17:40.047 "params": { 00:17:40.047 "bdev_io_pool_size": 65535, 00:17:40.047 "bdev_io_cache_size": 256, 00:17:40.047 "bdev_auto_examine": true, 00:17:40.047 "iobuf_small_cache_size": 128, 00:17:40.047 "iobuf_large_cache_size": 16 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_raid_set_options", 00:17:40.047 "params": { 00:17:40.047 "process_window_size_kb": 1024 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_iscsi_set_options", 00:17:40.047 "params": { 00:17:40.047 "timeout_sec": 30 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_nvme_set_options", 00:17:40.047 "params": { 00:17:40.047 "action_on_timeout": "none", 00:17:40.047 "timeout_us": 0, 00:17:40.047 "timeout_admin_us": 0, 00:17:40.047 "keep_alive_timeout_ms": 10000, 00:17:40.047 "arbitration_burst": 0, 00:17:40.047 "low_priority_weight": 0, 00:17:40.047 "medium_priority_weight": 0, 00:17:40.047 "high_priority_weight": 0, 00:17:40.047 "nvme_adminq_poll_period_us": 10000, 00:17:40.047 "nvme_ioq_poll_period_us": 0, 00:17:40.047 "io_queue_requests": 512, 00:17:40.047 "delay_cmd_submit": true, 00:17:40.047 "transport_retry_count": 
4, 00:17:40.047 "bdev_retry_count": 3, 00:17:40.047 "transport_ack_timeout": 0, 00:17:40.047 "ctrlr_loss_timeout_sec": 0, 00:17:40.047 "reconnect_delay_sec": 0, 00:17:40.047 "fast_io_fail_timeout_sec": 0, 00:17:40.047 "disable_auto_failback": false, 00:17:40.047 "generate_uuids": false, 00:17:40.047 "transport_tos": 0, 00:17:40.047 "nvme_error_stat": false, 00:17:40.047 "rdma_srq_size": 0, 00:17:40.047 "io_path_stat": false, 00:17:40.047 "allow_accel_sequence": false, 00:17:40.047 "rdma_max_cq_size": 0, 00:17:40.047 "rdma_cm_event_timeout_ms": 0, 00:17:40.047 "dhchap_digests": [ 00:17:40.047 "sha256", 00:17:40.047 "sha384", 00:17:40.047 "sha512" 00:17:40.047 ], 00:17:40.047 "dhchap_dhgroups": [ 00:17:40.047 "null", 00:17:40.047 "ffdhe2048", 00:17:40.047 "ffdhe3072", 00:17:40.047 "ffdhe4096", 00:17:40.047 "ffdhe6144", 00:17:40.047 "ffdhe8192" 00:17:40.047 ] 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_nvme_attach_controller", 00:17:40.047 "params": { 00:17:40.047 "name": "TLSTEST", 00:17:40.047 "trtype": "TCP", 00:17:40.047 "adrfam": "IPv4", 00:17:40.047 "traddr": "10.0.0.2", 00:17:40.047 "trsvcid": "4420", 00:17:40.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.047 "prchk_reftag": false, 00:17:40.047 "prchk_guard": false, 00:17:40.047 "ctrlr_loss_timeout_sec": 0, 00:17:40.047 "reconnect_delay_sec": 0, 00:17:40.047 "fast_io_fail_timeout_sec": 0, 00:17:40.047 "psk": "/tmp/tmp.p2vZaM4iuw", 00:17:40.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.047 "hdgst": false, 00:17:40.047 "ddgst": false 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_nvme_set_hotplug", 00:17:40.047 "params": { 00:17:40.047 "period_us": 100000, 00:17:40.047 "enable": false 00:17:40.047 } 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "method": "bdev_wait_for_examine" 00:17:40.047 } 00:17:40.047 ] 00:17:40.047 }, 00:17:40.047 { 00:17:40.047 "subsystem": "nbd", 00:17:40.047 "config": [] 00:17:40.047 } 00:17:40.047 ] 00:17:40.047 }' 00:17:40.047 04:08:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.047 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:40.047 [2024-04-19 04:08:54.521107] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:40.047 [2024-04-19 04:08:54.521166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841488 ] 00:17:40.047 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.320 [2024-04-19 04:08:54.578776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.321 [2024-04-19 04:08:54.647274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.321 [2024-04-19 04:08:54.779897] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.321 [2024-04-19 04:08:54.779977] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.888 04:08:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.888 04:08:55 -- common/autotest_common.sh@850 -- # return 0 00:17:40.888 04:08:55 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:41.146 Running I/O for 10 seconds... 
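bdevperf itself does not start I/O in -z mode; the run above is kicked off over its RPC socket by tls.sh@211. The exact call from this log, for reference:

# -t 20 is the RPC-side wait timeout; the 10-second I/O duration came from
# bdevperf's own -t 10 at startup
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests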
00:17:51.153 00:17:51.153 Latency(us) 00:17:51.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.153 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:51.153 Verification LBA range: start 0x0 length 0x2000 00:17:51.153 TLSTESTn1 : 10.02 5050.54 19.73 0.00 0.00 25301.28 8519.68 36223.53 00:17:51.153 =================================================================================================================== 00:17:51.153 Total : 5050.54 19.73 0.00 0.00 25301.28 8519.68 36223.53 00:17:51.153 0 00:17:51.153 04:09:05 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.153 04:09:05 -- target/tls.sh@214 -- # killprocess 3841488 00:17:51.153 04:09:05 -- common/autotest_common.sh@936 -- # '[' -z 3841488 ']' 00:17:51.153 04:09:05 -- common/autotest_common.sh@940 -- # kill -0 3841488 00:17:51.153 04:09:05 -- common/autotest_common.sh@941 -- # uname 00:17:51.153 04:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.153 04:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3841488 00:17:51.153 04:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:51.153 04:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:51.153 04:09:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3841488' 00:17:51.153 killing process with pid 3841488 00:17:51.153 04:09:05 -- common/autotest_common.sh@955 -- # kill 3841488 00:17:51.153 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.153 00:17:51.153 Latency(us) 00:17:51.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.153 =================================================================================================================== 00:17:51.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.153 [2024-04-19 04:09:05.615905] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:51.153 04:09:05 -- common/autotest_common.sh@960 -- # wait 3841488 00:17:51.412 04:09:05 -- target/tls.sh@215 -- # killprocess 3841213 00:17:51.412 04:09:05 -- common/autotest_common.sh@936 -- # '[' -z 3841213 ']' 00:17:51.412 04:09:05 -- common/autotest_common.sh@940 -- # kill -0 3841213 00:17:51.412 04:09:05 -- common/autotest_common.sh@941 -- # uname 00:17:51.412 04:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.412 04:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3841213 00:17:51.412 04:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:51.412 04:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:51.412 04:09:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3841213' 00:17:51.412 killing process with pid 3841213 00:17:51.412 04:09:05 -- common/autotest_common.sh@955 -- # kill 3841213 00:17:51.412 [2024-04-19 04:09:05.865026] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:51.412 04:09:05 -- common/autotest_common.sh@960 -- # wait 3841213 00:17:51.671 04:09:06 -- target/tls.sh@218 -- # nvmfappstart 00:17:51.671 04:09:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:51.671 04:09:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:51.671 04:09:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.671 04:09:06 
-- nvmf/common.sh@470 -- # nvmfpid=3843798 00:17:51.671 04:09:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:51.671 04:09:06 -- nvmf/common.sh@471 -- # waitforlisten 3843798 00:17:51.671 04:09:06 -- common/autotest_common.sh@817 -- # '[' -z 3843798 ']' 00:17:51.671 04:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.671 04:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:51.671 04:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.671 04:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:51.671 04:09:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.671 [2024-04-19 04:09:06.159088] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:51.671 [2024-04-19 04:09:06.159151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.671 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.929 [2024-04-19 04:09:06.248040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.929 [2024-04-19 04:09:06.335623] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.929 [2024-04-19 04:09:06.335665] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.929 [2024-04-19 04:09:06.335675] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.929 [2024-04-19 04:09:06.335684] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.929 [2024-04-19 04:09:06.335692] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
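The setup_nvmf_tgt helper that runs next (tls.sh@219, in the entries below) reduces to six RPCs against the fresh target; note that by this point the key file already carries the 0600 mode applied at tls.sh@181, so tcp_load_psk accepts it instead of raising the "Incorrect permissions for PSK file" error seen earlier. Condensed, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw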
00:17:51.929 [2024-04-19 04:09:06.335713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.495 04:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:52.495 04:09:07 -- common/autotest_common.sh@850 -- # return 0 00:17:52.495 04:09:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:52.495 04:09:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:52.495 04:09:07 -- common/autotest_common.sh@10 -- # set +x 00:17:52.753 04:09:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.753 04:09:07 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.p2vZaM4iuw 00:17:52.753 04:09:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.p2vZaM4iuw 00:17:52.753 04:09:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.753 [2024-04-19 04:09:07.273013] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.011 04:09:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.012 04:09:07 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.270 [2024-04-19 04:09:07.750290] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.270 [2024-04-19 04:09:07.750520] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.270 04:09:07 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.528 malloc0 00:17:53.528 04:09:08 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.787 04:09:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.p2vZaM4iuw 00:17:54.045 [2024-04-19 04:09:08.481547] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.045 04:09:08 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:54.045 04:09:08 -- target/tls.sh@222 -- # bdevperf_pid=3844405 00:17:54.045 04:09:08 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.045 04:09:08 -- target/tls.sh@225 -- # waitforlisten 3844405 /var/tmp/bdevperf.sock 00:17:54.045 04:09:08 -- common/autotest_common.sh@817 -- # '[' -z 3844405 ']' 00:17:54.045 04:09:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.045 04:09:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.045 04:09:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:54.045 04:09:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.045 04:09:08 -- common/autotest_common.sh@10 -- # set +x 00:17:54.045 [2024-04-19 04:09:08.537289] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:54.045 [2024-04-19 04:09:08.537351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844405 ] 00:17:54.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.304 [2024-04-19 04:09:08.611057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.304 [2024-04-19 04:09:08.702443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.304 04:09:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.304 04:09:08 -- common/autotest_common.sh@850 -- # return 0 00:17:54.304 04:09:08 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p2vZaM4iuw 00:17:54.561 04:09:09 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.819 [2024-04-19 04:09:09.259260] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.819 nvme0n1 00:17:55.078 04:09:09 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:55.078 Running I/O for 1 seconds... 
00:17:56.012 00:17:56.012 Latency(us) 00:17:56.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:56.012 Verification LBA range: start 0x0 length 0x2000 00:17:56.012 nvme0n1 : 1.02 3620.29 14.14 0.00 0.00 34992.96 7596.22 67680.81 00:17:56.012 =================================================================================================================== 00:17:56.012 Total : 3620.29 14.14 0.00 0.00 34992.96 7596.22 67680.81 00:17:56.012 0 00:17:56.012 04:09:10 -- target/tls.sh@234 -- # killprocess 3844405 00:17:56.012 04:09:10 -- common/autotest_common.sh@936 -- # '[' -z 3844405 ']' 00:17:56.012 04:09:10 -- common/autotest_common.sh@940 -- # kill -0 3844405 00:17:56.012 04:09:10 -- common/autotest_common.sh@941 -- # uname 00:17:56.012 04:09:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.012 04:09:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844405 00:17:56.271 04:09:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:56.271 04:09:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:56.271 04:09:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844405' 00:17:56.271 killing process with pid 3844405 00:17:56.271 04:09:10 -- common/autotest_common.sh@955 -- # kill 3844405 00:17:56.271 Received shutdown signal, test time was about 1.000000 seconds 00:17:56.271 00:17:56.271 Latency(us) 00:17:56.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.271 =================================================================================================================== 00:17:56.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.271 04:09:10 -- common/autotest_common.sh@960 -- # wait 3844405 00:17:56.271 04:09:10 -- target/tls.sh@235 -- # killprocess 3843798 00:17:56.271 04:09:10 -- common/autotest_common.sh@936 -- # '[' -z 3843798 ']' 00:17:56.271 04:09:10 -- common/autotest_common.sh@940 -- # kill -0 3843798 00:17:56.271 04:09:10 -- common/autotest_common.sh@941 -- # uname 00:17:56.271 04:09:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.271 04:09:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3843798 00:17:56.530 04:09:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.530 04:09:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.530 04:09:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3843798' 00:17:56.530 killing process with pid 3843798 00:17:56.530 04:09:10 -- common/autotest_common.sh@955 -- # kill 3843798 00:17:56.530 [2024-04-19 04:09:10.832639] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:56.530 04:09:10 -- common/autotest_common.sh@960 -- # wait 3843798 00:17:56.789 04:09:11 -- target/tls.sh@238 -- # nvmfappstart 00:17:56.789 04:09:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:56.789 04:09:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:56.789 04:09:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.789 04:09:11 -- nvmf/common.sh@470 -- # nvmfpid=3844937 00:17:56.789 04:09:11 -- nvmf/common.sh@471 -- # waitforlisten 3844937 00:17:56.789 04:09:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:17:56.789 04:09:11 -- common/autotest_common.sh@817 -- # '[' -z 3844937 ']' 00:17:56.789 04:09:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.789 04:09:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.789 04:09:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.789 04:09:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.789 04:09:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.789 [2024-04-19 04:09:11.128099] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:17:56.789 [2024-04-19 04:09:11.128159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.789 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.789 [2024-04-19 04:09:11.213656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.789 [2024-04-19 04:09:11.301609] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.789 [2024-04-19 04:09:11.301651] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.789 [2024-04-19 04:09:11.301661] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.789 [2024-04-19 04:09:11.301670] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.789 [2024-04-19 04:09:11.301678] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
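The app_setup_trace notices above are standard SPDK guidance: the target was launched with -e 0xFFFF, so every tracepoint group is enabled, and a snapshot can be pulled while the app runs or recovered from the shared-memory file afterwards. A sketch of both options, assuming spdk_trace was built alongside the target (the binary path is an assumption, not something this log shows):

  # live snapshot of the nvmf app's tracepoints (instance id 0, per the notice)
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the shm-backed trace file for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0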
00:17:56.789 [2024-04-19 04:09:11.301698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.727 04:09:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.727 04:09:12 -- common/autotest_common.sh@850 -- # return 0 00:17:57.727 04:09:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:57.727 04:09:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:57.727 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.727 04:09:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.727 04:09:12 -- target/tls.sh@239 -- # rpc_cmd 00:17:57.727 04:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:57.727 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.727 [2024-04-19 04:09:12.098248] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.727 malloc0 00:17:57.727 [2024-04-19 04:09:12.127562] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:57.727 [2024-04-19 04:09:12.127782] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.727 04:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:57.727 04:09:12 -- target/tls.sh@252 -- # bdevperf_pid=3845211 00:17:57.727 04:09:12 -- target/tls.sh@254 -- # waitforlisten 3845211 /var/tmp/bdevperf.sock 00:17:57.727 04:09:12 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:57.727 04:09:12 -- common/autotest_common.sh@817 -- # '[' -z 3845211 ']' 00:17:57.727 04:09:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.727 04:09:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.727 04:09:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.727 04:09:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.727 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.727 [2024-04-19 04:09:12.204504] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:17:57.727 [2024-04-19 04:09:12.204557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845211 ] 00:17:57.727 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.986 [2024-04-19 04:09:12.277440] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.986 [2024-04-19 04:09:12.362808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.986 04:09:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.986 04:09:12 -- common/autotest_common.sh@850 -- # return 0 00:17:57.986 04:09:12 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p2vZaM4iuw 00:17:58.244 04:09:12 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:58.503 [2024-04-19 04:09:12.938924] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.503 nvme0n1 00:17:58.762 04:09:13 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.762 Running I/O for 1 seconds... 00:17:59.697 00:17:59.697 Latency(us) 00:17:59.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:59.697 Verification LBA range: start 0x0 length 0x2000 00:17:59.697 nvme0n1 : 1.03 3808.79 14.88 0.00 0.00 33169.65 8877.15 36461.85 00:17:59.697 =================================================================================================================== 00:17:59.697 Total : 3808.79 14.88 0.00 0.00 33169.65 8877.15 36461.85 00:17:59.697 0 00:17:59.697 04:09:14 -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:59.697 04:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.697 04:09:14 -- common/autotest_common.sh@10 -- # set +x 00:17:59.956 04:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.956 04:09:14 -- target/tls.sh@263 -- # tgtcfg='{ 00:17:59.956 "subsystems": [ 00:17:59.956 { 00:17:59.956 "subsystem": "keyring", 00:17:59.956 "config": [ 00:17:59.956 { 00:17:59.956 "method": "keyring_file_add_key", 00:17:59.956 "params": { 00:17:59.956 "name": "key0", 00:17:59.956 "path": "/tmp/tmp.p2vZaM4iuw" 00:17:59.956 } 00:17:59.956 } 00:17:59.956 ] 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "subsystem": "iobuf", 00:17:59.956 "config": [ 00:17:59.956 { 00:17:59.956 "method": "iobuf_set_options", 00:17:59.956 "params": { 00:17:59.956 "small_pool_count": 8192, 00:17:59.956 "large_pool_count": 1024, 00:17:59.956 "small_bufsize": 8192, 00:17:59.956 "large_bufsize": 135168 00:17:59.956 } 00:17:59.956 } 00:17:59.956 ] 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "subsystem": "sock", 00:17:59.956 "config": [ 00:17:59.956 { 00:17:59.956 "method": "sock_impl_set_options", 00:17:59.956 "params": { 00:17:59.956 "impl_name": "posix", 00:17:59.956 "recv_buf_size": 2097152, 00:17:59.956 "send_buf_size": 2097152, 00:17:59.956 "enable_recv_pipe": true, 00:17:59.956 "enable_quickack": false, 00:17:59.956 "enable_placement_id": 0, 00:17:59.956 
"enable_zerocopy_send_server": true, 00:17:59.956 "enable_zerocopy_send_client": false, 00:17:59.956 "zerocopy_threshold": 0, 00:17:59.956 "tls_version": 0, 00:17:59.956 "enable_ktls": false 00:17:59.956 } 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "method": "sock_impl_set_options", 00:17:59.956 "params": { 00:17:59.956 "impl_name": "ssl", 00:17:59.956 "recv_buf_size": 4096, 00:17:59.956 "send_buf_size": 4096, 00:17:59.956 "enable_recv_pipe": true, 00:17:59.956 "enable_quickack": false, 00:17:59.956 "enable_placement_id": 0, 00:17:59.956 "enable_zerocopy_send_server": true, 00:17:59.956 "enable_zerocopy_send_client": false, 00:17:59.956 "zerocopy_threshold": 0, 00:17:59.956 "tls_version": 0, 00:17:59.956 "enable_ktls": false 00:17:59.956 } 00:17:59.956 } 00:17:59.956 ] 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "subsystem": "vmd", 00:17:59.956 "config": [] 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "subsystem": "accel", 00:17:59.956 "config": [ 00:17:59.956 { 00:17:59.956 "method": "accel_set_options", 00:17:59.956 "params": { 00:17:59.956 "small_cache_size": 128, 00:17:59.956 "large_cache_size": 16, 00:17:59.956 "task_count": 2048, 00:17:59.956 "sequence_count": 2048, 00:17:59.956 "buf_count": 2048 00:17:59.956 } 00:17:59.956 } 00:17:59.956 ] 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "subsystem": "bdev", 00:17:59.956 "config": [ 00:17:59.956 { 00:17:59.956 "method": "bdev_set_options", 00:17:59.956 "params": { 00:17:59.956 "bdev_io_pool_size": 65535, 00:17:59.956 "bdev_io_cache_size": 256, 00:17:59.956 "bdev_auto_examine": true, 00:17:59.956 "iobuf_small_cache_size": 128, 00:17:59.956 "iobuf_large_cache_size": 16 00:17:59.956 } 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "method": "bdev_raid_set_options", 00:17:59.956 "params": { 00:17:59.956 "process_window_size_kb": 1024 00:17:59.956 } 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "method": "bdev_iscsi_set_options", 00:17:59.956 "params": { 00:17:59.956 "timeout_sec": 30 00:17:59.956 } 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "method": "bdev_nvme_set_options", 00:17:59.956 "params": { 00:17:59.956 "action_on_timeout": "none", 00:17:59.956 "timeout_us": 0, 00:17:59.956 "timeout_admin_us": 0, 00:17:59.956 "keep_alive_timeout_ms": 10000, 00:17:59.956 "arbitration_burst": 0, 00:17:59.956 "low_priority_weight": 0, 00:17:59.956 "medium_priority_weight": 0, 00:17:59.956 "high_priority_weight": 0, 00:17:59.956 "nvme_adminq_poll_period_us": 10000, 00:17:59.956 "nvme_ioq_poll_period_us": 0, 00:17:59.956 "io_queue_requests": 0, 00:17:59.956 "delay_cmd_submit": true, 00:17:59.956 "transport_retry_count": 4, 00:17:59.956 "bdev_retry_count": 3, 00:17:59.956 "transport_ack_timeout": 0, 00:17:59.956 "ctrlr_loss_timeout_sec": 0, 00:17:59.956 "reconnect_delay_sec": 0, 00:17:59.956 "fast_io_fail_timeout_sec": 0, 00:17:59.956 "disable_auto_failback": false, 00:17:59.956 "generate_uuids": false, 00:17:59.956 "transport_tos": 0, 00:17:59.956 "nvme_error_stat": false, 00:17:59.956 "rdma_srq_size": 0, 00:17:59.956 "io_path_stat": false, 00:17:59.956 "allow_accel_sequence": false, 00:17:59.956 "rdma_max_cq_size": 0, 00:17:59.956 "rdma_cm_event_timeout_ms": 0, 00:17:59.956 "dhchap_digests": [ 00:17:59.956 "sha256", 00:17:59.956 "sha384", 00:17:59.956 "sha512" 00:17:59.956 ], 00:17:59.956 "dhchap_dhgroups": [ 00:17:59.956 "null", 00:17:59.956 "ffdhe2048", 00:17:59.956 "ffdhe3072", 00:17:59.956 "ffdhe4096", 00:17:59.956 "ffdhe6144", 00:17:59.956 "ffdhe8192" 00:17:59.956 ] 00:17:59.956 } 00:17:59.956 }, 00:17:59.956 { 00:17:59.956 "method": 
"bdev_nvme_set_hotplug", 00:17:59.956 "params": { 00:17:59.956 "period_us": 100000, 00:17:59.957 "enable": false 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "bdev_malloc_create", 00:17:59.957 "params": { 00:17:59.957 "name": "malloc0", 00:17:59.957 "num_blocks": 8192, 00:17:59.957 "block_size": 4096, 00:17:59.957 "physical_block_size": 4096, 00:17:59.957 "uuid": "691a4918-7955-448b-8ae9-ed9da04e8793", 00:17:59.957 "optimal_io_boundary": 0 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "bdev_wait_for_examine" 00:17:59.957 } 00:17:59.957 ] 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "subsystem": "nbd", 00:17:59.957 "config": [] 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "subsystem": "scheduler", 00:17:59.957 "config": [ 00:17:59.957 { 00:17:59.957 "method": "framework_set_scheduler", 00:17:59.957 "params": { 00:17:59.957 "name": "static" 00:17:59.957 } 00:17:59.957 } 00:17:59.957 ] 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "subsystem": "nvmf", 00:17:59.957 "config": [ 00:17:59.957 { 00:17:59.957 "method": "nvmf_set_config", 00:17:59.957 "params": { 00:17:59.957 "discovery_filter": "match_any", 00:17:59.957 "admin_cmd_passthru": { 00:17:59.957 "identify_ctrlr": false 00:17:59.957 } 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_set_max_subsystems", 00:17:59.957 "params": { 00:17:59.957 "max_subsystems": 1024 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_set_crdt", 00:17:59.957 "params": { 00:17:59.957 "crdt1": 0, 00:17:59.957 "crdt2": 0, 00:17:59.957 "crdt3": 0 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_create_transport", 00:17:59.957 "params": { 00:17:59.957 "trtype": "TCP", 00:17:59.957 "max_queue_depth": 128, 00:17:59.957 "max_io_qpairs_per_ctrlr": 127, 00:17:59.957 "in_capsule_data_size": 4096, 00:17:59.957 "max_io_size": 131072, 00:17:59.957 "io_unit_size": 131072, 00:17:59.957 "max_aq_depth": 128, 00:17:59.957 "num_shared_buffers": 511, 00:17:59.957 "buf_cache_size": 4294967295, 00:17:59.957 "dif_insert_or_strip": false, 00:17:59.957 "zcopy": false, 00:17:59.957 "c2h_success": false, 00:17:59.957 "sock_priority": 0, 00:17:59.957 "abort_timeout_sec": 1, 00:17:59.957 "ack_timeout": 0 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_create_subsystem", 00:17:59.957 "params": { 00:17:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.957 "allow_any_host": false, 00:17:59.957 "serial_number": "00000000000000000000", 00:17:59.957 "model_number": "SPDK bdev Controller", 00:17:59.957 "max_namespaces": 32, 00:17:59.957 "min_cntlid": 1, 00:17:59.957 "max_cntlid": 65519, 00:17:59.957 "ana_reporting": false 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_subsystem_add_host", 00:17:59.957 "params": { 00:17:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.957 "host": "nqn.2016-06.io.spdk:host1", 00:17:59.957 "psk": "key0" 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_subsystem_add_ns", 00:17:59.957 "params": { 00:17:59.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.957 "namespace": { 00:17:59.957 "nsid": 1, 00:17:59.957 "bdev_name": "malloc0", 00:17:59.957 "nguid": "691A49187955448B8AE9ED9DA04E8793", 00:17:59.957 "uuid": "691a4918-7955-448b-8ae9-ed9da04e8793", 00:17:59.957 "no_auto_visible": false 00:17:59.957 } 00:17:59.957 } 00:17:59.957 }, 00:17:59.957 { 00:17:59.957 "method": "nvmf_subsystem_add_listener", 00:17:59.957 "params": { 00:17:59.957 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:59.957 "listen_address": { 00:17:59.957 "trtype": "TCP", 00:17:59.957 "adrfam": "IPv4", 00:17:59.957 "traddr": "10.0.0.2", 00:17:59.957 "trsvcid": "4420" 00:17:59.957 }, 00:17:59.957 "secure_channel": true 00:17:59.957 } 00:17:59.957 } 00:17:59.957 ] 00:17:59.957 } 00:17:59.957 ] 00:17:59.957 }' 00:17:59.957 04:09:14 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:00.216 04:09:14 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:00.217 "subsystems": [ 00:18:00.217 { 00:18:00.217 "subsystem": "keyring", 00:18:00.217 "config": [ 00:18:00.217 { 00:18:00.217 "method": "keyring_file_add_key", 00:18:00.217 "params": { 00:18:00.217 "name": "key0", 00:18:00.217 "path": "/tmp/tmp.p2vZaM4iuw" 00:18:00.217 } 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "iobuf", 00:18:00.217 "config": [ 00:18:00.217 { 00:18:00.217 "method": "iobuf_set_options", 00:18:00.217 "params": { 00:18:00.217 "small_pool_count": 8192, 00:18:00.217 "large_pool_count": 1024, 00:18:00.217 "small_bufsize": 8192, 00:18:00.217 "large_bufsize": 135168 00:18:00.217 } 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "sock", 00:18:00.217 "config": [ 00:18:00.217 { 00:18:00.217 "method": "sock_impl_set_options", 00:18:00.217 "params": { 00:18:00.217 "impl_name": "posix", 00:18:00.217 "recv_buf_size": 2097152, 00:18:00.217 "send_buf_size": 2097152, 00:18:00.217 "enable_recv_pipe": true, 00:18:00.217 "enable_quickack": false, 00:18:00.217 "enable_placement_id": 0, 00:18:00.217 "enable_zerocopy_send_server": true, 00:18:00.217 "enable_zerocopy_send_client": false, 00:18:00.217 "zerocopy_threshold": 0, 00:18:00.217 "tls_version": 0, 00:18:00.217 "enable_ktls": false 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "sock_impl_set_options", 00:18:00.217 "params": { 00:18:00.217 "impl_name": "ssl", 00:18:00.217 "recv_buf_size": 4096, 00:18:00.217 "send_buf_size": 4096, 00:18:00.217 "enable_recv_pipe": true, 00:18:00.217 "enable_quickack": false, 00:18:00.217 "enable_placement_id": 0, 00:18:00.217 "enable_zerocopy_send_server": true, 00:18:00.217 "enable_zerocopy_send_client": false, 00:18:00.217 "zerocopy_threshold": 0, 00:18:00.217 "tls_version": 0, 00:18:00.217 "enable_ktls": false 00:18:00.217 } 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "vmd", 00:18:00.217 "config": [] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "accel", 00:18:00.217 "config": [ 00:18:00.217 { 00:18:00.217 "method": "accel_set_options", 00:18:00.217 "params": { 00:18:00.217 "small_cache_size": 128, 00:18:00.217 "large_cache_size": 16, 00:18:00.217 "task_count": 2048, 00:18:00.217 "sequence_count": 2048, 00:18:00.217 "buf_count": 2048 00:18:00.217 } 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "bdev", 00:18:00.217 "config": [ 00:18:00.217 { 00:18:00.217 "method": "bdev_set_options", 00:18:00.217 "params": { 00:18:00.217 "bdev_io_pool_size": 65535, 00:18:00.217 "bdev_io_cache_size": 256, 00:18:00.217 "bdev_auto_examine": true, 00:18:00.217 "iobuf_small_cache_size": 128, 00:18:00.217 "iobuf_large_cache_size": 16 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_raid_set_options", 00:18:00.217 "params": { 00:18:00.217 "process_window_size_kb": 1024 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_iscsi_set_options", 
00:18:00.217 "params": { 00:18:00.217 "timeout_sec": 30 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_nvme_set_options", 00:18:00.217 "params": { 00:18:00.217 "action_on_timeout": "none", 00:18:00.217 "timeout_us": 0, 00:18:00.217 "timeout_admin_us": 0, 00:18:00.217 "keep_alive_timeout_ms": 10000, 00:18:00.217 "arbitration_burst": 0, 00:18:00.217 "low_priority_weight": 0, 00:18:00.217 "medium_priority_weight": 0, 00:18:00.217 "high_priority_weight": 0, 00:18:00.217 "nvme_adminq_poll_period_us": 10000, 00:18:00.217 "nvme_ioq_poll_period_us": 0, 00:18:00.217 "io_queue_requests": 512, 00:18:00.217 "delay_cmd_submit": true, 00:18:00.217 "transport_retry_count": 4, 00:18:00.217 "bdev_retry_count": 3, 00:18:00.217 "transport_ack_timeout": 0, 00:18:00.217 "ctrlr_loss_timeout_sec": 0, 00:18:00.217 "reconnect_delay_sec": 0, 00:18:00.217 "fast_io_fail_timeout_sec": 0, 00:18:00.217 "disable_auto_failback": false, 00:18:00.217 "generate_uuids": false, 00:18:00.217 "transport_tos": 0, 00:18:00.217 "nvme_error_stat": false, 00:18:00.217 "rdma_srq_size": 0, 00:18:00.217 "io_path_stat": false, 00:18:00.217 "allow_accel_sequence": false, 00:18:00.217 "rdma_max_cq_size": 0, 00:18:00.217 "rdma_cm_event_timeout_ms": 0, 00:18:00.217 "dhchap_digests": [ 00:18:00.217 "sha256", 00:18:00.217 "sha384", 00:18:00.217 "sha512" 00:18:00.217 ], 00:18:00.217 "dhchap_dhgroups": [ 00:18:00.217 "null", 00:18:00.217 "ffdhe2048", 00:18:00.217 "ffdhe3072", 00:18:00.217 "ffdhe4096", 00:18:00.217 "ffdhe6144", 00:18:00.217 "ffdhe8192" 00:18:00.217 ] 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_nvme_attach_controller", 00:18:00.217 "params": { 00:18:00.217 "name": "nvme0", 00:18:00.217 "trtype": "TCP", 00:18:00.217 "adrfam": "IPv4", 00:18:00.217 "traddr": "10.0.0.2", 00:18:00.217 "trsvcid": "4420", 00:18:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.217 "prchk_reftag": false, 00:18:00.217 "prchk_guard": false, 00:18:00.217 "ctrlr_loss_timeout_sec": 0, 00:18:00.217 "reconnect_delay_sec": 0, 00:18:00.217 "fast_io_fail_timeout_sec": 0, 00:18:00.217 "psk": "key0", 00:18:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:00.217 "hdgst": false, 00:18:00.217 "ddgst": false 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_nvme_set_hotplug", 00:18:00.217 "params": { 00:18:00.217 "period_us": 100000, 00:18:00.217 "enable": false 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_enable_histogram", 00:18:00.217 "params": { 00:18:00.217 "name": "nvme0n1", 00:18:00.217 "enable": true 00:18:00.217 } 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "method": "bdev_wait_for_examine" 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }, 00:18:00.217 { 00:18:00.217 "subsystem": "nbd", 00:18:00.217 "config": [] 00:18:00.217 } 00:18:00.217 ] 00:18:00.217 }' 00:18:00.217 04:09:14 -- target/tls.sh@266 -- # killprocess 3845211 00:18:00.217 04:09:14 -- common/autotest_common.sh@936 -- # '[' -z 3845211 ']' 00:18:00.217 04:09:14 -- common/autotest_common.sh@940 -- # kill -0 3845211 00:18:00.217 04:09:14 -- common/autotest_common.sh@941 -- # uname 00:18:00.217 04:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.217 04:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3845211 00:18:00.217 04:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.217 04:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.217 04:09:14 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 3845211' 00:18:00.217 killing process with pid 3845211 00:18:00.217 04:09:14 -- common/autotest_common.sh@955 -- # kill 3845211 00:18:00.217 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.217 00:18:00.217 Latency(us) 00:18:00.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.217 =================================================================================================================== 00:18:00.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.217 04:09:14 -- common/autotest_common.sh@960 -- # wait 3845211 00:18:00.476 04:09:14 -- target/tls.sh@267 -- # killprocess 3844937 00:18:00.476 04:09:14 -- common/autotest_common.sh@936 -- # '[' -z 3844937 ']' 00:18:00.476 04:09:14 -- common/autotest_common.sh@940 -- # kill -0 3844937 00:18:00.476 04:09:14 -- common/autotest_common.sh@941 -- # uname 00:18:00.476 04:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.476 04:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3844937 00:18:00.476 04:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.476 04:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.476 04:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3844937' 00:18:00.476 killing process with pid 3844937 00:18:00.476 04:09:14 -- common/autotest_common.sh@955 -- # kill 3844937 00:18:00.476 04:09:14 -- common/autotest_common.sh@960 -- # wait 3844937 00:18:00.736 04:09:15 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:00.736 04:09:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.736 04:09:15 -- target/tls.sh@269 -- # echo '{ 00:18:00.736 "subsystems": [ 00:18:00.736 { 00:18:00.736 "subsystem": "keyring", 00:18:00.736 "config": [ 00:18:00.736 { 00:18:00.736 "method": "keyring_file_add_key", 00:18:00.736 "params": { 00:18:00.736 "name": "key0", 00:18:00.736 "path": "/tmp/tmp.p2vZaM4iuw" 00:18:00.736 } 00:18:00.736 } 00:18:00.736 ] 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "subsystem": "iobuf", 00:18:00.736 "config": [ 00:18:00.736 { 00:18:00.736 "method": "iobuf_set_options", 00:18:00.736 "params": { 00:18:00.736 "small_pool_count": 8192, 00:18:00.736 "large_pool_count": 1024, 00:18:00.736 "small_bufsize": 8192, 00:18:00.736 "large_bufsize": 135168 00:18:00.736 } 00:18:00.736 } 00:18:00.736 ] 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "subsystem": "sock", 00:18:00.736 "config": [ 00:18:00.736 { 00:18:00.736 "method": "sock_impl_set_options", 00:18:00.736 "params": { 00:18:00.736 "impl_name": "posix", 00:18:00.736 "recv_buf_size": 2097152, 00:18:00.736 "send_buf_size": 2097152, 00:18:00.736 "enable_recv_pipe": true, 00:18:00.736 "enable_quickack": false, 00:18:00.736 "enable_placement_id": 0, 00:18:00.736 "enable_zerocopy_send_server": true, 00:18:00.736 "enable_zerocopy_send_client": false, 00:18:00.736 "zerocopy_threshold": 0, 00:18:00.736 "tls_version": 0, 00:18:00.736 "enable_ktls": false 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "sock_impl_set_options", 00:18:00.736 "params": { 00:18:00.736 "impl_name": "ssl", 00:18:00.736 "recv_buf_size": 4096, 00:18:00.736 "send_buf_size": 4096, 00:18:00.736 "enable_recv_pipe": true, 00:18:00.736 "enable_quickack": false, 00:18:00.736 "enable_placement_id": 0, 00:18:00.736 "enable_zerocopy_send_server": true, 00:18:00.736 "enable_zerocopy_send_client": false, 00:18:00.736 "zerocopy_threshold": 0, 00:18:00.736 "tls_version": 0, 
00:18:00.736 "enable_ktls": false 00:18:00.736 } 00:18:00.736 } 00:18:00.736 ] 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "subsystem": "vmd", 00:18:00.736 "config": [] 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "subsystem": "accel", 00:18:00.736 "config": [ 00:18:00.736 { 00:18:00.736 "method": "accel_set_options", 00:18:00.736 "params": { 00:18:00.736 "small_cache_size": 128, 00:18:00.736 "large_cache_size": 16, 00:18:00.736 "task_count": 2048, 00:18:00.736 "sequence_count": 2048, 00:18:00.736 "buf_count": 2048 00:18:00.736 } 00:18:00.736 } 00:18:00.736 ] 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "subsystem": "bdev", 00:18:00.736 "config": [ 00:18:00.736 { 00:18:00.736 "method": "bdev_set_options", 00:18:00.736 "params": { 00:18:00.736 "bdev_io_pool_size": 65535, 00:18:00.736 "bdev_io_cache_size": 256, 00:18:00.736 "bdev_auto_examine": true, 00:18:00.736 "iobuf_small_cache_size": 128, 00:18:00.736 "iobuf_large_cache_size": 16 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "bdev_raid_set_options", 00:18:00.736 "params": { 00:18:00.736 "process_window_size_kb": 1024 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "bdev_iscsi_set_options", 00:18:00.736 "params": { 00:18:00.736 "timeout_sec": 30 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "bdev_nvme_set_options", 00:18:00.736 "params": { 00:18:00.736 "action_on_timeout": "none", 00:18:00.736 "timeout_us": 0, 00:18:00.736 "timeout_admin_us": 0, 00:18:00.736 "keep_alive_timeout_ms": 10000, 00:18:00.736 "arbitration_burst": 0, 00:18:00.736 "low_priority_weight": 0, 00:18:00.736 "medium_priority_weight": 0, 00:18:00.736 "high_priority_weight": 0, 00:18:00.736 "nvme_adminq_poll_period_us": 10000, 00:18:00.736 "nvme_ioq_poll_period_us": 0, 00:18:00.736 "io_queue_requests": 0, 00:18:00.736 "delay_cmd_submit": true, 00:18:00.736 "transport_retry_count": 4, 00:18:00.736 "bdev_retry_count": 3, 00:18:00.736 "transport_ack_timeout": 0, 00:18:00.736 "ctrlr_loss_timeout_sec": 0, 00:18:00.736 "reconnect_delay_sec": 0, 00:18:00.736 "fast_io_fail_timeout_sec": 0, 00:18:00.736 "disable_auto_failback": false, 00:18:00.736 "generate_uuids": false, 00:18:00.736 "transport_tos": 0, 00:18:00.736 "nvme_error_stat": false, 00:18:00.736 "rdma_srq_size": 0, 00:18:00.736 "io_path_stat": false, 00:18:00.736 "allow_accel_sequence": false, 00:18:00.736 "rdma_max_cq_size": 0, 00:18:00.736 "rdma_cm_event_timeout_ms": 0, 00:18:00.736 "dhchap_digests": [ 00:18:00.736 "sha256", 00:18:00.736 "sha384", 00:18:00.736 "sha512" 00:18:00.736 ], 00:18:00.736 "dhchap_dhgroups": [ 00:18:00.736 "null", 00:18:00.736 "ffdhe2048", 00:18:00.736 "ffdhe3072", 00:18:00.736 "ffdhe4096", 00:18:00.736 "ffdhe6144", 00:18:00.736 "ffdhe8192" 00:18:00.736 ] 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "bdev_nvme_set_hotplug", 00:18:00.736 "params": { 00:18:00.736 "period_us": 100000, 00:18:00.736 "enable": false 00:18:00.736 } 00:18:00.736 }, 00:18:00.736 { 00:18:00.736 "method": "bdev_malloc_create", 00:18:00.737 "params": { 00:18:00.737 "name": "malloc0", 00:18:00.737 "num_blocks": 8192, 00:18:00.737 "block_size": 4096, 00:18:00.737 "physical_block_size": 4096, 00:18:00.737 "uuid": "691a4918-7955-448b-8ae9-ed9da04e8793", 00:18:00.737 "optimal_io_boundary": 0 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "bdev_wait_for_examine" 00:18:00.737 } 00:18:00.737 ] 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "subsystem": "nbd", 00:18:00.737 "config": [] 00:18:00.737 }, 00:18:00.737 { 
00:18:00.737 "subsystem": "scheduler", 00:18:00.737 "config": [ 00:18:00.737 { 00:18:00.737 "method": "framework_set_scheduler", 00:18:00.737 "params": { 00:18:00.737 "name": "static" 00:18:00.737 } 00:18:00.737 } 00:18:00.737 ] 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "subsystem": "nvmf", 00:18:00.737 "config": [ 00:18:00.737 { 00:18:00.737 "method": "nvmf_set_config", 00:18:00.737 "params": { 00:18:00.737 "discovery_filter": "match_any", 00:18:00.737 "admin_cmd_passthru": { 00:18:00.737 "identify_ctrlr": false 00:18:00.737 } 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_set_max_subsystems", 00:18:00.737 "params": { 00:18:00.737 "max_subsystems": 1024 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_set_crdt", 00:18:00.737 "params": { 00:18:00.737 "crdt1": 0, 00:18:00.737 "crdt2": 0, 00:18:00.737 "crdt3": 0 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_create_transport", 00:18:00.737 "params": { 00:18:00.737 "trtype": "TCP", 00:18:00.737 "max_queue_depth": 128, 00:18:00.737 "max_io_qpairs_per_ctrlr": 127, 00:18:00.737 "in_capsule_data_size": 4096, 00:18:00.737 "max_io_size": 131072, 00:18:00.737 "io_unit_size": 131072, 00:18:00.737 "max_aq_depth": 128, 00:18:00.737 "num_shared_buffers": 511, 00:18:00.737 "buf_cache_size": 4294967295, 00:18:00.737 "dif_insert_or_strip": false, 00:18:00.737 "zcopy": false, 00:18:00.737 "c2h_success": false, 00:18:00.737 "sock_priority": 0, 00:18:00.737 "abort_timeout_sec": 1, 00:18:00.737 "ack_timeout": 0 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_create_subsystem", 00:18:00.737 "params": { 00:18:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.737 "allow_any_host": false, 00:18:00.737 "serial_number": "00000000000000000000", 00:18:00.737 "model_number": "SPDK bdev Controller", 00:18:00.737 "max_namespaces": 32, 00:18:00.737 "min_cntlid": 1, 00:18:00.737 "max_cntlid": 65519, 00:18:00.737 "ana_reporting": false 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_subsystem_add_host", 00:18:00.737 "params": { 00:18:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.737 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.737 "psk": "key0" 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_subsystem_add_ns", 00:18:00.737 "params": { 00:18:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.737 "namespace": { 00:18:00.737 "nsid": 1, 00:18:00.737 "bdev_name": "malloc0", 00:18:00.737 "nguid": "691A49187955448B8AE9ED9DA04E8793", 00:18:00.737 "uuid": "691a4918-7955-448b-8ae9-ed9da04e8793", 00:18:00.737 "no_auto_visible": false 00:18:00.737 } 00:18:00.737 } 00:18:00.737 }, 00:18:00.737 { 00:18:00.737 "method": "nvmf_subsystem_add_listener", 00:18:00.737 "params": { 00:18:00.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.737 "listen_address": { 00:18:00.737 "trtype": "TCP", 00:18:00.737 "adrfam": "IPv4", 00:18:00.737 "traddr": "10.0.0.2", 00:18:00.737 "trsvcid": "4420" 00:18:00.737 }, 00:18:00.737 "secure_channel": true 00:18:00.737 } 00:18:00.737 } 00:18:00.737 ] 00:18:00.737 } 00:18:00.737 ] 00:18:00.737 }' 00:18:00.737 04:09:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.737 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:18:00.737 04:09:15 -- nvmf/common.sh@470 -- # nvmfpid=3845755 00:18:00.737 04:09:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:00.737 
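Note that the launch line above passes no config file on disk: target/tls.sh@269 echoes the JSON captured from the previous target straight into the new process, which sees it as /dev/fd/62. The same round-trip in isolation, using the tgtcfg name the script itself uses (a sketch, not the script's literal plumbing):

  # capture the live configuration, then replay it into a fresh target
  tgtcfg=$(scripts/rpc.py save_config)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")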
04:09:15 -- nvmf/common.sh@471 -- # waitforlisten 3845755 00:18:00.737 04:09:15 -- common/autotest_common.sh@817 -- # '[' -z 3845755 ']' 00:18:00.737 04:09:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.737 04:09:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.737 04:09:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.737 04:09:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.737 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:18:00.737 [2024-04-19 04:09:15.231516] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:00.737 [2024-04-19 04:09:15.231580] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.996 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.996 [2024-04-19 04:09:15.316953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.996 [2024-04-19 04:09:15.404385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.997 [2024-04-19 04:09:15.404429] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.997 [2024-04-19 04:09:15.404445] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.997 [2024-04-19 04:09:15.404453] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.997 [2024-04-19 04:09:15.404461] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
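Reading the nvmf section of the replayed dump above, the JSON corresponds roughly to the rpc.py sequence below. Parameter values are verbatim from the config; the flag spellings are assumptions for this SPDK revision, so treat this as a sketch rather than the script's actual commands:

  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel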
00:18:00.997 [2024-04-19 04:09:15.404524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.255 [2024-04-19 04:09:15.615477] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.255 [2024-04-19 04:09:15.647493] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.255 [2024-04-19 04:09:15.660696] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.824 04:09:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.824 04:09:16 -- common/autotest_common.sh@850 -- # return 0 00:18:01.824 04:09:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:01.824 04:09:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.824 04:09:16 -- common/autotest_common.sh@10 -- # set +x 00:18:01.824 04:09:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.824 04:09:16 -- target/tls.sh@272 -- # bdevperf_pid=3845908 00:18:01.824 04:09:16 -- target/tls.sh@273 -- # waitforlisten 3845908 /var/tmp/bdevperf.sock 00:18:01.824 04:09:16 -- common/autotest_common.sh@817 -- # '[' -z 3845908 ']' 00:18:01.824 04:09:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.824 04:09:16 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:01.824 04:09:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.824 04:09:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
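As with the target, bdevperf gets its saved configuration on a file descriptor (-c /dev/fd/63 in the command above) instead of being reconfigured RPC by RPC, and -z makes it idle until the workload is requested over its socket. The shape of the pattern, with the bperfcfg name taken from the script:

  # -z: start suspended; perform_tests over the RPC socket kicks off the I/O
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests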
00:18:01.824 04:09:16 -- target/tls.sh@270 -- # echo '{ 00:18:01.824 "subsystems": [ 00:18:01.824 { 00:18:01.824 "subsystem": "keyring", 00:18:01.824 "config": [ 00:18:01.824 { 00:18:01.824 "method": "keyring_file_add_key", 00:18:01.824 "params": { 00:18:01.824 "name": "key0", 00:18:01.824 "path": "/tmp/tmp.p2vZaM4iuw" 00:18:01.824 } 00:18:01.824 } 00:18:01.824 ] 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "subsystem": "iobuf", 00:18:01.824 "config": [ 00:18:01.824 { 00:18:01.824 "method": "iobuf_set_options", 00:18:01.824 "params": { 00:18:01.824 "small_pool_count": 8192, 00:18:01.824 "large_pool_count": 1024, 00:18:01.824 "small_bufsize": 8192, 00:18:01.824 "large_bufsize": 135168 00:18:01.824 } 00:18:01.824 } 00:18:01.824 ] 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "subsystem": "sock", 00:18:01.824 "config": [ 00:18:01.824 { 00:18:01.824 "method": "sock_impl_set_options", 00:18:01.824 "params": { 00:18:01.824 "impl_name": "posix", 00:18:01.824 "recv_buf_size": 2097152, 00:18:01.824 "send_buf_size": 2097152, 00:18:01.824 "enable_recv_pipe": true, 00:18:01.824 "enable_quickack": false, 00:18:01.824 "enable_placement_id": 0, 00:18:01.824 "enable_zerocopy_send_server": true, 00:18:01.824 "enable_zerocopy_send_client": false, 00:18:01.824 "zerocopy_threshold": 0, 00:18:01.824 "tls_version": 0, 00:18:01.824 "enable_ktls": false 00:18:01.824 } 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "method": "sock_impl_set_options", 00:18:01.824 "params": { 00:18:01.824 "impl_name": "ssl", 00:18:01.824 "recv_buf_size": 4096, 00:18:01.824 "send_buf_size": 4096, 00:18:01.824 "enable_recv_pipe": true, 00:18:01.824 "enable_quickack": false, 00:18:01.824 "enable_placement_id": 0, 00:18:01.824 "enable_zerocopy_send_server": true, 00:18:01.824 "enable_zerocopy_send_client": false, 00:18:01.824 "zerocopy_threshold": 0, 00:18:01.824 "tls_version": 0, 00:18:01.824 "enable_ktls": false 00:18:01.824 } 00:18:01.824 } 00:18:01.824 ] 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "subsystem": "vmd", 00:18:01.824 "config": [] 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "subsystem": "accel", 00:18:01.824 "config": [ 00:18:01.824 { 00:18:01.824 "method": "accel_set_options", 00:18:01.824 "params": { 00:18:01.824 "small_cache_size": 128, 00:18:01.824 "large_cache_size": 16, 00:18:01.824 "task_count": 2048, 00:18:01.824 "sequence_count": 2048, 00:18:01.824 "buf_count": 2048 00:18:01.824 } 00:18:01.824 } 00:18:01.824 ] 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "subsystem": "bdev", 00:18:01.824 "config": [ 00:18:01.824 { 00:18:01.824 "method": "bdev_set_options", 00:18:01.824 "params": { 00:18:01.824 "bdev_io_pool_size": 65535, 00:18:01.824 "bdev_io_cache_size": 256, 00:18:01.824 "bdev_auto_examine": true, 00:18:01.824 "iobuf_small_cache_size": 128, 00:18:01.824 "iobuf_large_cache_size": 16 00:18:01.824 } 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "method": "bdev_raid_set_options", 00:18:01.824 "params": { 00:18:01.824 "process_window_size_kb": 1024 00:18:01.824 } 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "method": "bdev_iscsi_set_options", 00:18:01.824 "params": { 00:18:01.824 "timeout_sec": 30 00:18:01.824 } 00:18:01.824 }, 00:18:01.824 { 00:18:01.824 "method": "bdev_nvme_set_options", 00:18:01.824 "params": { 00:18:01.824 "action_on_timeout": "none", 00:18:01.824 "timeout_us": 0, 00:18:01.824 "timeout_admin_us": 0, 00:18:01.824 "keep_alive_timeout_ms": 10000, 00:18:01.824 "arbitration_burst": 0, 00:18:01.824 "low_priority_weight": 0, 00:18:01.824 "medium_priority_weight": 0, 00:18:01.824 "high_priority_weight": 0, 
00:18:01.824 "nvme_adminq_poll_period_us": 10000, 00:18:01.824 "nvme_ioq_poll_period_us": 0, 00:18:01.824 "io_queue_requests": 512, 00:18:01.824 "delay_cmd_submit": true, 00:18:01.824 "transport_retry_count": 4, 00:18:01.824 "bdev_retry_count": 3, 00:18:01.825 "transport_ack_timeout": 0, 00:18:01.825 "ctrlr_loss_timeout_sec": 0, 00:18:01.825 "reconnect_delay_sec": 0, 00:18:01.825 "fast_io_fail_timeout_sec": 0, 00:18:01.825 "disable_auto_failback": false, 00:18:01.825 "generate_uuids": false, 00:18:01.825 "transport_tos": 0, 00:18:01.825 "nvme_error_stat": false, 00:18:01.825 "rdma_srq_size": 0, 00:18:01.825 "io_path_stat": false, 00:18:01.825 "allow_accel_sequence": false, 00:18:01.825 "rdma_max_cq_size": 0, 00:18:01.825 "rdma_cm_event_timeout_ms": 0, 00:18:01.825 "dhchap_digests": [ 00:18:01.825 "sha256", 00:18:01.825 "sha384", 00:18:01.825 "sha512" 00:18:01.825 ], 00:18:01.825 "dhchap_dhgroups": [ 00:18:01.825 "null", 00:18:01.825 "ffdhe2048", 00:18:01.825 "ffdhe3072", 00:18:01.825 "ffdhe4096", 00:18:01.825 "ffdhe6144", 00:18:01.825 "ffdhe8192" 00:18:01.825 ] 00:18:01.825 } 00:18:01.825 }, 00:18:01.825 { 00:18:01.825 "method": "bdev_nvme_attach_controller", 00:18:01.825 "params": { 00:18:01.825 "name": "nvme0", 00:18:01.825 "trtype": "TCP", 00:18:01.825 "adrfam": "IPv4", 00:18:01.825 "traddr": "10.0.0.2", 00:18:01.825 "trsvcid": "4420", 00:18:01.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.825 "prchk_reftag": false, 00:18:01.825 "prchk_guard": false, 00:18:01.825 "ctrlr_loss_timeout_sec": 0, 00:18:01.825 "reconnect_delay_sec": 0, 00:18:01.825 "fast_io_fail_timeout_sec": 0, 00:18:01.825 "psk": "key0", 00:18:01.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.825 "hdgst": false, 00:18:01.825 "ddgst": false 00:18:01.825 } 00:18:01.825 }, 00:18:01.825 { 00:18:01.825 "method": "bdev_nvme_set_hotplug", 00:18:01.825 "params": { 00:18:01.825 "period_us": 100000, 00:18:01.825 "enable": false 00:18:01.825 } 00:18:01.825 }, 00:18:01.825 { 00:18:01.825 "method": "bdev_enable_histogram", 00:18:01.825 "params": { 00:18:01.825 "name": "nvme0n1", 00:18:01.825 "enable": true 00:18:01.825 } 00:18:01.825 }, 00:18:01.825 { 00:18:01.825 "method": "bdev_wait_for_examine" 00:18:01.825 } 00:18:01.825 ] 00:18:01.825 }, 00:18:01.825 { 00:18:01.825 "subsystem": "nbd", 00:18:01.825 "config": [] 00:18:01.825 } 00:18:01.825 ] 00:18:01.825 }' 00:18:01.825 04:09:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.825 04:09:16 -- common/autotest_common.sh@10 -- # set +x 00:18:01.825 [2024-04-19 04:09:16.246129] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:18:01.825 [2024-04-19 04:09:16.246193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845908 ] 00:18:01.825 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.825 [2024-04-19 04:09:16.320700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.084 [2024-04-19 04:09:16.409208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.084 [2024-04-19 04:09:16.558195] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.652 04:09:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.652 04:09:17 -- common/autotest_common.sh@850 -- # return 0 00:18:02.911 04:09:17 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:02.911 04:09:17 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:02.911 04:09:17 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.911 04:09:17 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:03.169 Running I/O for 1 seconds... 00:18:04.106 00:18:04.106 Latency(us) 00:18:04.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.106 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:04.106 Verification LBA range: start 0x0 length 0x2000 00:18:04.106 nvme0n1 : 1.02 3836.52 14.99 0.00 0.00 33023.16 8638.84 36700.16 00:18:04.106 =================================================================================================================== 00:18:04.106 Total : 3836.52 14.99 0.00 0.00 33023.16 8638.84 36700.16 00:18:04.106 0 00:18:04.106 04:09:18 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:04.106 04:09:18 -- target/tls.sh@279 -- # cleanup 00:18:04.106 04:09:18 -- target/tls.sh@15 -- # process_shm --id 0 00:18:04.106 04:09:18 -- common/autotest_common.sh@794 -- # type=--id 00:18:04.106 04:09:18 -- common/autotest_common.sh@795 -- # id=0 00:18:04.106 04:09:18 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:04.106 04:09:18 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:04.106 04:09:18 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:04.106 04:09:18 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:04.106 04:09:18 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:04.106 04:09:18 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:04.106 nvmf_trace.0 00:18:04.370 04:09:18 -- common/autotest_common.sh@809 -- # return 0 00:18:04.370 04:09:18 -- target/tls.sh@16 -- # killprocess 3845908 00:18:04.370 04:09:18 -- common/autotest_common.sh@936 -- # '[' -z 3845908 ']' 00:18:04.370 04:09:18 -- common/autotest_common.sh@940 -- # kill -0 3845908 00:18:04.370 04:09:18 -- common/autotest_common.sh@941 -- # uname 00:18:04.370 04:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.370 04:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3845908 00:18:04.370 04:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.370 04:09:18 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:18:04.370 04:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3845908' 00:18:04.370 killing process with pid 3845908 00:18:04.370 04:09:18 -- common/autotest_common.sh@955 -- # kill 3845908 00:18:04.370 Received shutdown signal, test time was about 1.000000 seconds 00:18:04.370 00:18:04.370 Latency(us) 00:18:04.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.370 =================================================================================================================== 00:18:04.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.370 04:09:18 -- common/autotest_common.sh@960 -- # wait 3845908 00:18:04.647 04:09:18 -- target/tls.sh@17 -- # nvmftestfini 00:18:04.647 04:09:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:04.647 04:09:18 -- nvmf/common.sh@117 -- # sync 00:18:04.648 04:09:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.648 04:09:18 -- nvmf/common.sh@120 -- # set +e 00:18:04.648 04:09:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.648 04:09:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.648 rmmod nvme_tcp 00:18:04.648 rmmod nvme_fabrics 00:18:04.648 rmmod nvme_keyring 00:18:04.648 04:09:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.648 04:09:19 -- nvmf/common.sh@124 -- # set -e 00:18:04.648 04:09:19 -- nvmf/common.sh@125 -- # return 0 00:18:04.648 04:09:19 -- nvmf/common.sh@478 -- # '[' -n 3845755 ']' 00:18:04.648 04:09:19 -- nvmf/common.sh@479 -- # killprocess 3845755 00:18:04.648 04:09:19 -- common/autotest_common.sh@936 -- # '[' -z 3845755 ']' 00:18:04.648 04:09:19 -- common/autotest_common.sh@940 -- # kill -0 3845755 00:18:04.648 04:09:19 -- common/autotest_common.sh@941 -- # uname 00:18:04.648 04:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.648 04:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3845755 00:18:04.648 04:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:04.648 04:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:04.648 04:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3845755' 00:18:04.648 killing process with pid 3845755 00:18:04.648 04:09:19 -- common/autotest_common.sh@955 -- # kill 3845755 00:18:04.648 04:09:19 -- common/autotest_common.sh@960 -- # wait 3845755 00:18:04.920 04:09:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:04.920 04:09:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:04.920 04:09:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:04.920 04:09:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.920 04:09:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.920 04:09:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.920 04:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.920 04:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.456 04:09:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:07.456 04:09:21 -- target/tls.sh@18 -- # rm -f /tmp/tmp.7jVDmAUfg0 /tmp/tmp.gV6RP3K7CE /tmp/tmp.p2vZaM4iuw 00:18:07.456 00:18:07.456 real 1m23.053s 00:18:07.456 user 2m8.603s 00:18:07.456 sys 0m28.912s 00:18:07.456 04:09:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.456 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:07.456 ************************************ 00:18:07.456 END TEST nvmf_tls 00:18:07.456 
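One detail from the run above worth keeping: before driving I/O, target/tls.sh@275 confirmed that the attach actually produced a controller rather than letting bdevperf fail later. The check as traced in the log:

  # list controllers on the bdevperf app and assert the expected name
  name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]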
************************************ 00:18:07.456 04:09:21 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:07.456 04:09:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.456 04:09:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.456 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:07.456 ************************************ 00:18:07.456 START TEST nvmf_fips 00:18:07.456 ************************************ 00:18:07.456 04:09:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:07.456 * Looking for test storage... 00:18:07.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:07.456 04:09:21 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.456 04:09:21 -- nvmf/common.sh@7 -- # uname -s 00:18:07.456 04:09:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.456 04:09:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.456 04:09:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.456 04:09:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.456 04:09:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.456 04:09:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.456 04:09:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.456 04:09:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.456 04:09:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.456 04:09:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.456 04:09:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:07.456 04:09:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:07.456 04:09:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.456 04:09:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.456 04:09:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.456 04:09:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.456 04:09:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.456 04:09:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.456 04:09:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.456 04:09:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.456 04:09:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.456 04:09:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.456 04:09:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.456 04:09:21 -- paths/export.sh@5 -- # export PATH 00:18:07.456 04:09:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.456 04:09:21 -- nvmf/common.sh@47 -- # : 0 00:18:07.456 04:09:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.456 04:09:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.456 04:09:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.456 04:09:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.456 04:09:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.456 04:09:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.456 04:09:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.456 04:09:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.456 04:09:21 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:07.456 04:09:21 -- fips/fips.sh@89 -- # check_openssl_version 00:18:07.456 04:09:21 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:07.456 04:09:21 -- fips/fips.sh@85 -- # openssl version 00:18:07.456 04:09:21 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:07.456 04:09:21 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:07.456 04:09:21 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:07.456 04:09:21 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:07.456 04:09:21 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:07.456 04:09:21 -- scripts/common.sh@333 -- # IFS=.-: 00:18:07.456 04:09:21 -- scripts/common.sh@333 -- # read -ra ver1 00:18:07.456 04:09:21 -- scripts/common.sh@334 -- # IFS=.-: 00:18:07.456 04:09:21 -- scripts/common.sh@334 -- # read -ra ver2 00:18:07.456 04:09:21 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:07.456 04:09:21 -- scripts/common.sh@337 -- # ver1_l=3 00:18:07.456 04:09:21 -- scripts/common.sh@338 -- # ver2_l=3 00:18:07.456 04:09:21 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:18:07.456 04:09:21 -- scripts/common.sh@341 -- # case "$op" in 00:18:07.456 04:09:21 -- scripts/common.sh@345 -- # : 1 00:18:07.456 04:09:21 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:07.456 04:09:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # decimal 3 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=3 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 3 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # decimal 3 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=3 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 3 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:07.457 04:09:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:07.457 04:09:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:07.457 04:09:21 -- scripts/common.sh@361 -- # (( v++ )) 00:18:07.457 04:09:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # decimal 0 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=0 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 0 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # decimal 0 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=0 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 0 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:07.457 04:09:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:07.457 04:09:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:07.457 04:09:21 -- scripts/common.sh@361 -- # (( v++ )) 00:18:07.457 04:09:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # decimal 9 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=9 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 9 00:18:07.457 04:09:21 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # decimal 0 00:18:07.457 04:09:21 -- scripts/common.sh@350 -- # local d=0 00:18:07.457 04:09:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:07.457 04:09:21 -- scripts/common.sh@352 -- # echo 0 00:18:07.457 04:09:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:07.457 04:09:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:07.457 04:09:21 -- scripts/common.sh@364 -- # return 0 00:18:07.457 04:09:21 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:07.457 04:09:21 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:07.457 04:09:21 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:07.457 04:09:21 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:07.457 04:09:21 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:07.457 04:09:21 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:07.457 04:09:21 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:07.457 04:09:21 -- fips/fips.sh@113 -- # build_openssl_config 00:18:07.457 04:09:21 -- fips/fips.sh@37 -- # cat 00:18:07.457 04:09:21 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:07.457 04:09:21 -- fips/fips.sh@58 -- # cat - 00:18:07.457 04:09:21 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:07.457 04:09:21 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:07.457 04:09:21 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:07.457 04:09:21 -- fips/fips.sh@116 -- # openssl list -providers 00:18:07.457 04:09:21 -- fips/fips.sh@116 -- # grep name 00:18:07.457 04:09:21 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:07.457 04:09:21 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:07.457 04:09:21 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:07.457 04:09:21 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:07.457 04:09:21 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.457 04:09:21 -- fips/fips.sh@127 -- # : 00:18:07.457 04:09:21 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:07.457 04:09:21 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:07.457 04:09:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.457 04:09:21 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:07.457 04:09:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.457 04:09:21 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:07.457 04:09:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.457 04:09:21 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:07.457 04:09:21 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:07.457 04:09:21 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:07.457 Error setting digest 00:18:07.457 00E2B6C52B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:07.457 00E2B6C52B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:07.457 04:09:21 -- common/autotest_common.sh@641 -- # es=1 00:18:07.457 04:09:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.457 04:09:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.457 04:09:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.457 04:09:21 -- fips/fips.sh@130 -- # nvmftestinit 00:18:07.457 04:09:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:07.457 04:09:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.457 04:09:21 -- nvmf/common.sh@437 -- # prepare_net_devs 
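By this point fips.sh has located the FIPS module under /usr/lib64/ossl-modules, written a spdk_fips.conf, confirmed both the base and fips providers are loaded, and proved the configuration is enforced by watching a non-approved digest fail — the `Error setting digest` lines above are the expected outcome, inverted by the `NOT` wrapper. A minimal sketch of that negative check, assuming OPENSSL_CONF already points at a provider configuration equivalent to the traced spdk_fips.conf:

```bash
#!/usr/bin/env bash
# With a FIPS-only provider configuration active, MD5 must be rejected.
export OPENSSL_CONF=${OPENSSL_CONF:-spdk_fips.conf}

# Both providers should be visible, as in the trace above.
openssl list -providers | grep -q fips || { echo "fips provider not loaded" >&2; exit 1; }

if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded: FIPS restrictions are NOT in effect" >&2
    exit 1
fi
echo "MD5 rejected as expected: FIPS restrictions are active"
```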
00:18:07.457 04:09:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:07.457 04:09:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:07.457 04:09:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.457 04:09:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.457 04:09:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.457 04:09:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:07.457 04:09:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:07.457 04:09:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.457 04:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:12.738 04:09:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:12.738 04:09:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.738 04:09:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.738 04:09:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.738 04:09:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.738 04:09:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.738 04:09:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.738 04:09:27 -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.738 04:09:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.738 04:09:27 -- nvmf/common.sh@296 -- # e810=() 00:18:12.738 04:09:27 -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.738 04:09:27 -- nvmf/common.sh@297 -- # x722=() 00:18:12.738 04:09:27 -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.738 04:09:27 -- nvmf/common.sh@298 -- # mlx=() 00:18:12.738 04:09:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.738 04:09:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.738 04:09:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.738 04:09:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.738 04:09:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.738 04:09:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.738 04:09:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.738 04:09:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.739 04:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:12.739 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:12.739 04:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.739 04:09:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:12.739 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:12.739 04:09:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.739 04:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.739 04:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.739 04:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:12.739 Found net devices under 0000:af:00.0: cvl_0_0 00:18:12.739 04:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.739 04:09:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.739 04:09:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.739 04:09:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.739 04:09:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:12.739 Found net devices under 0000:af:00.1: cvl_0_1 00:18:12.739 04:09:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.739 04:09:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:12.739 04:09:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:12.739 04:09:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:12.739 04:09:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.739 04:09:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.739 04:09:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.739 04:09:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.739 04:09:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.739 04:09:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.739 04:09:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.739 04:09:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.739 04:09:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.739 04:09:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.739 04:09:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.739 04:09:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.739 04:09:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.011 04:09:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.011 04:09:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:18:13.011 04:09:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.011 04:09:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.011 04:09:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.011 04:09:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.011 04:09:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:18:13.011 00:18:13.011 --- 10.0.0.2 ping statistics --- 00:18:13.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.011 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:13.011 04:09:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:13.011 00:18:13.011 --- 10.0.0.1 ping statistics --- 00:18:13.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.011 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:13.011 04:09:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.011 04:09:27 -- nvmf/common.sh@411 -- # return 0 00:18:13.011 04:09:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:13.011 04:09:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.011 04:09:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:13.011 04:09:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:13.011 04:09:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.011 04:09:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:13.011 04:09:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:13.011 04:09:27 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:13.011 04:09:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:13.011 04:09:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:13.011 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.011 04:09:27 -- nvmf/common.sh@470 -- # nvmfpid=3850075 00:18:13.011 04:09:27 -- nvmf/common.sh@471 -- # waitforlisten 3850075 00:18:13.011 04:09:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.011 04:09:27 -- common/autotest_common.sh@817 -- # '[' -z 3850075 ']' 00:18:13.011 04:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.011 04:09:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.011 04:09:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.011 04:09:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.011 04:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:13.275 [2024-04-19 04:09:27.570042] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
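The nvmf_tcp_init sequence traced above builds a self-contained two-endpoint test bed on one host: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed replay of those steps (root required; interface names as in this run):

```bash
#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk

# Target port goes into its own namespace; initiator port stays put.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) into the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```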
00:18:13.275 [2024-04-19 04:09:27.570099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.275 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.275 [2024-04-19 04:09:27.646606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.275 [2024-04-19 04:09:27.735871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.275 [2024-04-19 04:09:27.735910] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.275 [2024-04-19 04:09:27.735921] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.275 [2024-04-19 04:09:27.735929] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.275 [2024-04-19 04:09:27.735937] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.275 [2024-04-19 04:09:27.735956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.211 04:09:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.211 04:09:28 -- common/autotest_common.sh@850 -- # return 0 00:18:14.211 04:09:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:14.211 04:09:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:14.211 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.211 04:09:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.211 04:09:28 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:14.211 04:09:28 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:14.211 04:09:28 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.211 04:09:28 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:14.211 04:09:28 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.211 04:09:28 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.212 04:09:28 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.212 04:09:28 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.212 [2024-04-19 04:09:28.738083] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.471 [2024-04-19 04:09:28.754076] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.471 [2024-04-19 04:09:28.754265] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.471 [2024-04-19 04:09:28.783409] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:14.471 malloc0 00:18:14.471 04:09:28 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.471 04:09:28 -- fips/fips.sh@147 -- # bdevperf_pid=3850363 00:18:14.471 04:09:28 -- fips/fips.sh@148 -- # waitforlisten 3850363 /var/tmp/bdevperf.sock 00:18:14.471 04:09:28 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.471 04:09:28 -- common/autotest_common.sh@817 -- # '[' -z 3850363 ']' 00:18:14.471 04:09:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.471 04:09:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:14.471 04:09:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.471 04:09:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:14.471 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:14.471 [2024-04-19 04:09:28.876150] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:14.471 [2024-04-19 04:09:28.876212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850363 ] 00:18:14.471 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.471 [2024-04-19 04:09:28.933597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.730 [2024-04-19 04:09:29.001967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.730 04:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.730 04:09:29 -- common/autotest_common.sh@850 -- # return 0 00:18:14.730 04:09:29 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:14.989 [2024-04-19 04:09:29.317027] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.989 [2024-04-19 04:09:29.317096] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:14.989 TLSTESTn1 00:18:14.989 04:09:29 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.989 Running I/O for 10 seconds... 
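fips.sh then provisions TLS: a retained PSK in the NVMeTLSkey-1:01: interchange format is written to a 0600-mode key file, handed to the target via setup_nvmf_tgt_conf, and handed to the bdevperf initiator via `--psk` when attaching the controller (both sides log that TLS and the PSK path are experimental/deprecated in this release). A condensed sketch of the initiator-side attach as traced, with the long workspace paths replaced by a placeholder:

```bash
#!/usr/bin/env bash
set -e
SPDK=/path/to/spdk              # placeholder; the run uses the jenkins workspace
KEY=$SPDK/test/nvmf/fips/key.txt

# Retained PSK in the NVMe/TCP interchange format (same test key as the run).
echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$KEY"
chmod 0600 "$KEY"

# Attach a TLS-protected controller from the bdevperf app, as traced.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$KEY"

# Drive I/O through the attached bdev for the timed run, as traced.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```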
00:18:27.200 00:18:27.200 Latency(us) 00:18:27.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.200 Verification LBA range: start 0x0 length 0x2000 00:18:27.200 TLSTESTn1 : 10.02 5151.10 20.12 0.00 0.00 24802.31 4766.25 29074.15 00:18:27.200 =================================================================================================================== 00:18:27.200 Total : 5151.10 20.12 0.00 0.00 24802.31 4766.25 29074.15 00:18:27.200 0 00:18:27.200 04:09:39 -- fips/fips.sh@1 -- # cleanup 00:18:27.200 04:09:39 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:27.200 04:09:39 -- common/autotest_common.sh@794 -- # type=--id 00:18:27.200 04:09:39 -- common/autotest_common.sh@795 -- # id=0 00:18:27.200 04:09:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:27.200 04:09:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:27.200 04:09:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:27.200 04:09:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:27.200 04:09:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:27.200 04:09:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:27.200 nvmf_trace.0 00:18:27.200 04:09:39 -- common/autotest_common.sh@809 -- # return 0 00:18:27.200 04:09:39 -- fips/fips.sh@16 -- # killprocess 3850363 00:18:27.200 04:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3850363 ']' 00:18:27.200 04:09:39 -- common/autotest_common.sh@940 -- # kill -0 3850363 00:18:27.200 04:09:39 -- common/autotest_common.sh@941 -- # uname 00:18:27.200 04:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.200 04:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3850363 00:18:27.200 04:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:27.200 04:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:27.200 04:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3850363' 00:18:27.200 killing process with pid 3850363 00:18:27.200 04:09:39 -- common/autotest_common.sh@955 -- # kill 3850363 00:18:27.200 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.200 00:18:27.200 Latency(us) 00:18:27.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.200 =================================================================================================================== 00:18:27.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.200 [2024-04-19 04:09:39.723596] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:27.200 04:09:39 -- common/autotest_common.sh@960 -- # wait 3850363 00:18:27.200 04:09:39 -- fips/fips.sh@17 -- # nvmftestfini 00:18:27.200 04:09:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:27.200 04:09:39 -- nvmf/common.sh@117 -- # sync 00:18:27.200 04:09:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.200 04:09:39 -- nvmf/common.sh@120 -- # set +e 00:18:27.200 04:09:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.201 04:09:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.201 rmmod nvme_tcp 00:18:27.201 rmmod nvme_fabrics 00:18:27.201 rmmod nvme_keyring 
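A quick consistency check on the result table above: at the 4096-byte I/O size, the reported IOPS and MiB/s columns agree, since 5151.10 IOPS × 4096 B ÷ 2^20 ≈ 20.12 MiB/s. The same arithmetic as a one-liner:

```bash
awk 'BEGIN { printf "%.2f MiB/s\n", 5151.10 * 4096 / (1024 * 1024) }'   # -> 20.12 MiB/s
```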
00:18:27.201 04:09:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.201 04:09:39 -- nvmf/common.sh@124 -- # set -e 00:18:27.201 04:09:39 -- nvmf/common.sh@125 -- # return 0 00:18:27.201 04:09:39 -- nvmf/common.sh@478 -- # '[' -n 3850075 ']' 00:18:27.201 04:09:39 -- nvmf/common.sh@479 -- # killprocess 3850075 00:18:27.201 04:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3850075 ']' 00:18:27.201 04:09:39 -- common/autotest_common.sh@940 -- # kill -0 3850075 00:18:27.201 04:09:39 -- common/autotest_common.sh@941 -- # uname 00:18:27.201 04:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.201 04:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3850075 00:18:27.201 04:09:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:27.201 04:09:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:27.201 04:09:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3850075' 00:18:27.201 killing process with pid 3850075 00:18:27.201 04:09:40 -- common/autotest_common.sh@955 -- # kill 3850075 00:18:27.201 [2024-04-19 04:09:40.040469] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:27.201 04:09:40 -- common/autotest_common.sh@960 -- # wait 3850075 00:18:27.201 04:09:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:27.201 04:09:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:27.201 04:09:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:27.201 04:09:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.201 04:09:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.201 04:09:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.201 04:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.201 04:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.137 04:09:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.137 04:09:42 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:28.137 00:18:28.137 real 0m20.790s 00:18:28.137 user 0m22.245s 00:18:28.137 sys 0m9.118s 00:18:28.137 04:09:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.137 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.137 ************************************ 00:18:28.137 END TEST nvmf_fips 00:18:28.137 ************************************ 00:18:28.137 04:09:42 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:18:28.137 04:09:42 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:18:28.137 04:09:42 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:18:28.137 04:09:42 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:18:28.137 04:09:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.137 04:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 04:09:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:33.433 04:09:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.433 04:09:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.433 04:09:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.433 04:09:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.433 04:09:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.433 04:09:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.433 04:09:47 -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.433 04:09:47 -- nvmf/common.sh@295 -- # local -ga net_devs 
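With the fips test torn down, the umbrella nvmf.sh script re-runs gather_supported_nvmf_pci_devs to decide whether ADQ-capable NICs are present before launching the perf_adq suite: it buckets PCI functions by vendor:device ID (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus the Mellanox list) and, for TCP, keeps the E810 ports. A minimal sketch of that identification step; the real helper consults a prebuilt pci_bus_cache, whereas this walks /sys directly:

```bash
#!/usr/bin/env bash
# Find Intel E810 functions (vendor 0x8086, device 0x1592 or 0x159b),
# the devices this run classifies as 'e810'.
intel=0x8086
e810=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    [[ $vendor == "$intel" ]] || continue
    case $device in
        0x1592|0x159b)
            e810+=( "${dev##*/}" )
            echo "Found ${dev##*/} ($vendor - $device)"
            ;;
    esac
done
(( ${#e810[@]} > 0 )) || { echo "no supported NICs found" >&2; exit 1; }
```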
00:18:33.433 04:09:47 -- nvmf/common.sh@296 -- # e810=() 00:18:33.433 04:09:47 -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.433 04:09:47 -- nvmf/common.sh@297 -- # x722=() 00:18:33.433 04:09:47 -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.433 04:09:47 -- nvmf/common.sh@298 -- # mlx=() 00:18:33.433 04:09:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.433 04:09:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.433 04:09:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.434 04:09:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.434 04:09:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.434 04:09:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.434 04:09:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.434 04:09:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.434 04:09:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.434 04:09:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.434 04:09:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:33.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:33.434 04:09:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.434 04:09:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:33.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:33.434 04:09:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.434 04:09:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.434 04:09:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.434 04:09:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.434 04:09:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:33.434 04:09:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.434 04:09:47 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:18:33.434 Found net devices under 0000:af:00.0: cvl_0_0 00:18:33.434 04:09:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.434 04:09:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.434 04:09:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.434 04:09:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:33.434 04:09:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.434 04:09:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:33.434 Found net devices under 0000:af:00.1: cvl_0_1 00:18:33.434 04:09:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.434 04:09:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:33.434 04:09:47 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.434 04:09:47 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:18:33.434 04:09:47 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:33.434 04:09:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:33.434 04:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.434 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:33.693 ************************************ 00:18:33.693 START TEST nvmf_perf_adq 00:18:33.693 ************************************ 00:18:33.693 04:09:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:33.693 * Looking for test storage... 00:18:33.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.693 04:09:48 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.693 04:09:48 -- nvmf/common.sh@7 -- # uname -s 00:18:33.693 04:09:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.693 04:09:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.693 04:09:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.693 04:09:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.693 04:09:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.693 04:09:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.693 04:09:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.693 04:09:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.693 04:09:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.693 04:09:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.693 04:09:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.693 04:09:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:18:33.693 04:09:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.693 04:09:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.693 04:09:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.693 04:09:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.693 04:09:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.693 04:09:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.693 04:09:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.693 04:09:48 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.694 04:09:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.694 04:09:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.694 04:09:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.694 04:09:48 -- paths/export.sh@5 -- # export PATH 00:18:33.694 04:09:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.694 04:09:48 -- nvmf/common.sh@47 -- # : 0 00:18:33.694 04:09:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.694 04:09:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.694 04:09:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.694 04:09:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.694 04:09:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.694 04:09:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.694 04:09:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.694 04:09:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.694 04:09:48 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:33.694 04:09:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.694 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:18:40.300 04:09:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:40.300 04:09:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.300 04:09:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.300 04:09:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.300 
04:09:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.300 04:09:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.300 04:09:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.300 04:09:53 -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.300 04:09:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.300 04:09:53 -- nvmf/common.sh@296 -- # e810=() 00:18:40.300 04:09:53 -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.300 04:09:53 -- nvmf/common.sh@297 -- # x722=() 00:18:40.300 04:09:53 -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.300 04:09:53 -- nvmf/common.sh@298 -- # mlx=() 00:18:40.300 04:09:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.300 04:09:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.300 04:09:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.300 04:09:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.300 04:09:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.300 04:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.300 04:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:40.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:40.300 04:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.300 04:09:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:40.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:40.300 04:09:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.300 04:09:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.301 04:09:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.301 04:09:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.301 04:09:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.301 04:09:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.301 04:09:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.301 04:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
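The loop whose body is traced just below (and earlier in the fips run) resolves each matched PCI function to its kernel net device through sysfs; the two E810 ports show up as cvl_0_0 and cvl_0_1. A one-screen sketch of the same resolution, with the PCI address as a hypothetical example argument:

```bash
#!/usr/bin/env bash
# Resolve the net device name(s) registered under a PCI function, the
# way the traced loop does. The default address is only an example.
pci=${1:-0000:af:00.0}

shopt -s nullglob
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
if (( ${#pci_net_devs[@]} == 0 )); then
    echo "no net devices under $pci (driver not bound?)" >&2
    exit 1
fi
# Keep only the basenames, e.g. cvl_0_0.
pci_net_devs=( "${pci_net_devs[@]##*/}" )
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```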
00:18:40.301 04:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.301 04:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.301 04:09:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.301 04:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:40.301 Found net devices under 0000:af:00.0: cvl_0_0 00:18:40.301 04:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.301 04:09:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.301 04:09:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.301 04:09:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:40.301 04:09:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.301 04:09:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:40.301 Found net devices under 0000:af:00.1: cvl_0_1 00:18:40.301 04:09:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.301 04:09:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:40.301 04:09:53 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.301 04:09:53 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:40.301 04:09:53 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:40.301 04:09:53 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:18:40.301 04:09:53 -- target/perf_adq.sh@52 -- # rmmod ice 00:18:40.301 04:09:54 -- target/perf_adq.sh@53 -- # modprobe ice 00:18:42.211 04:09:56 -- target/perf_adq.sh@54 -- # sleep 5 00:18:47.487 04:10:01 -- target/perf_adq.sh@67 -- # nvmftestinit 00:18:47.487 04:10:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:47.487 04:10:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.487 04:10:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:47.487 04:10:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:47.487 04:10:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:47.487 04:10:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.487 04:10:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.487 04:10:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.487 04:10:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:47.487 04:10:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:47.487 04:10:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.487 04:10:01 -- common/autotest_common.sh@10 -- # set +x 00:18:47.487 04:10:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:47.487 04:10:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.487 04:10:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.487 04:10:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.487 04:10:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.487 04:10:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.487 04:10:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.487 04:10:01 -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.487 04:10:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.487 04:10:01 -- nvmf/common.sh@296 -- # e810=() 00:18:47.487 04:10:01 -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.487 04:10:01 -- nvmf/common.sh@297 -- # x722=() 00:18:47.487 04:10:01 -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.487 04:10:01 -- nvmf/common.sh@298 -- # mlx=() 00:18:47.487 04:10:01 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:18:47.487 04:10:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.487 04:10:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.487 04:10:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.487 04:10:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.487 04:10:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.487 04:10:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.487 04:10:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.488 04:10:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:47.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:47.488 04:10:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.488 04:10:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:47.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:47.488 04:10:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.488 04:10:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.488 04:10:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.488 04:10:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:47.488 Found net devices under 0000:af:00.0: cvl_0_0 00:18:47.488 04:10:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.488 04:10:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.488 04:10:01 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.488 04:10:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.488 04:10:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:47.488 Found net devices under 0000:af:00.1: cvl_0_1 00:18:47.488 04:10:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.488 04:10:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:47.488 04:10:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:47.488 04:10:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:47.488 04:10:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.488 04:10:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.488 04:10:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.488 04:10:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.488 04:10:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.488 04:10:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.488 04:10:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.488 04:10:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.488 04:10:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.488 04:10:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.488 04:10:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.488 04:10:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.488 04:10:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.488 04:10:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.488 04:10:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.488 04:10:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.488 04:10:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.488 04:10:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.488 04:10:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.488 04:10:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:18:47.488 00:18:47.488 --- 10.0.0.2 ping statistics --- 00:18:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.488 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:47.488 04:10:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:18:47.488 00:18:47.488 --- 10.0.0.1 ping statistics --- 00:18:47.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.488 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:47.488 04:10:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.488 04:10:02 -- nvmf/common.sh@411 -- # return 0 00:18:47.488 04:10:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:47.488 04:10:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.488 04:10:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:47.488 04:10:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:47.747 04:10:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.747 04:10:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:47.747 04:10:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:47.747 04:10:02 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:47.747 04:10:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:47.747 04:10:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:47.747 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.747 04:10:02 -- nvmf/common.sh@470 -- # nvmfpid=3860600 00:18:47.747 04:10:02 -- nvmf/common.sh@471 -- # waitforlisten 3860600 00:18:47.747 04:10:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:47.747 04:10:02 -- common/autotest_common.sh@817 -- # '[' -z 3860600 ']' 00:18:47.747 04:10:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.747 04:10:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.747 04:10:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.747 04:10:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.747 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.747 [2024-04-19 04:10:02.103288] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:18:47.747 [2024-04-19 04:10:02.103356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.747 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.747 [2024-04-19 04:10:02.191963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.006 [2024-04-19 04:10:02.282906] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.006 [2024-04-19 04:10:02.282947] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.006 [2024-04-19 04:10:02.282958] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.006 [2024-04-19 04:10:02.282966] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.006 [2024-04-19 04:10:02.282976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
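The target here is deliberately started with `--wait-for-rpc`: ADQ needs the posix sock implementation reconfigured (placement ID, zero-copy send) before the subsystem framework initializes, which is exactly the RPC ordering adq_configure_nvmf_target issues in the trace below. A condensed sketch of that ordering, with the rpc.py path shortened to a placeholder:

```bash
#!/usr/bin/env bash
set -e
RPC=/path/to/spdk/scripts/rpc.py   # placeholder for the workspace path

# 1. Tune the posix sock layer while the target is still waiting for RPCs.
$RPC sock_impl_set_options --enable-placement-id 0 \
     --enable-zerocopy-send-server -i posix

# 2. Only then let framework initialization proceed.
$RPC framework_start_init

# 3. Create the TCP transport with the ADQ-relevant socket priority.
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
```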
00:18:48.006 [2024-04-19 04:10:02.283207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.006 [2024-04-19 04:10:02.283289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.006 [2024-04-19 04:10:02.283397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.006 [2024-04-19 04:10:02.283399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.571 04:10:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.571 04:10:03 -- common/autotest_common.sh@850 -- # return 0 00:18:48.572 04:10:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:48.572 04:10:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:48.572 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.572 04:10:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.572 04:10:03 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:18:48.572 04:10:03 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:48.572 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.572 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.572 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.572 04:10:03 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:18:48.572 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.572 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:48.829 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.829 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 [2024-04-19 04:10:03.183750] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.829 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.829 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.829 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 Malloc1 00:18:48.829 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.829 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.829 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.829 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.829 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 04:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.829 04:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.829 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.829 [2024-04-19 04:10:03.239409] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.829 04:10:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.829 04:10:03 -- target/perf_adq.sh@73 -- # perfpid=3860880 00:18:48.829 04:10:03 -- target/perf_adq.sh@74 -- # sleep 2 00:18:48.829 04:10:03 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:48.829 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.730 04:10:05 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:18:50.730 04:10:05 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:50.730 04:10:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.730 04:10:05 -- target/perf_adq.sh@76 -- # wc -l 00:18:50.730 04:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.989 04:10:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.989 04:10:05 -- target/perf_adq.sh@76 -- # count=4 00:18:50.989 04:10:05 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:18:50.989 04:10:05 -- target/perf_adq.sh@81 -- # wait 3860880 00:18:59.104 Initializing NVMe Controllers 00:18:59.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:59.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:59.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:59.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:59.104 Initialization complete. Launching workers. 00:18:59.104 ======================================================== 00:18:59.104 Latency(us) 00:18:59.104 Device Information : IOPS MiB/s Average min max 00:18:59.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7710.20 30.12 8309.58 4037.66 44889.59 00:18:59.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9925.77 38.77 6449.75 1683.44 11112.90 00:18:59.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7724.70 30.17 8288.27 6953.63 10942.24 00:18:59.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7724.70 30.17 8286.11 7192.12 10983.36 00:18:59.104 ======================================================== 00:18:59.104 Total : 33085.38 129.24 7741.17 1683.44 44889.59 00:18:59.104 00:18:59.104 04:10:13 -- target/perf_adq.sh@82 -- # nvmftestfini 00:18:59.104 04:10:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:59.104 04:10:13 -- nvmf/common.sh@117 -- # sync 00:18:59.104 04:10:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.104 04:10:13 -- nvmf/common.sh@120 -- # set +e 00:18:59.104 04:10:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.104 04:10:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.104 rmmod nvme_tcp 00:18:59.104 rmmod nvme_fabrics 00:18:59.104 rmmod nvme_keyring 00:18:59.104 04:10:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.104 04:10:13 -- nvmf/common.sh@124 -- # set -e 00:18:59.104 04:10:13 -- nvmf/common.sh@125 -- # return 0 00:18:59.104 04:10:13 -- nvmf/common.sh@478 -- # '[' -n 3860600 ']' 00:18:59.104 04:10:13 -- nvmf/common.sh@479 -- # killprocess 3860600 00:18:59.104 04:10:13 -- common/autotest_common.sh@936 -- # '[' -z 3860600 ']' 00:18:59.104 04:10:13 -- common/autotest_common.sh@940 -- # 
kill -0 3860600 00:18:59.104 04:10:13 -- common/autotest_common.sh@941 -- # uname 00:18:59.104 04:10:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.104 04:10:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3860600 00:18:59.104 04:10:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.104 04:10:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.104 04:10:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3860600' 00:18:59.104 killing process with pid 3860600 00:18:59.104 04:10:13 -- common/autotest_common.sh@955 -- # kill 3860600 00:18:59.104 04:10:13 -- common/autotest_common.sh@960 -- # wait 3860600 00:18:59.363 04:10:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:59.363 04:10:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:59.363 04:10:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:59.363 04:10:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.363 04:10:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.363 04:10:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.363 04:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.363 04:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.897 04:10:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.897 04:10:15 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:01.897 04:10:15 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:02.496 04:10:16 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:05.039 04:10:19 -- target/perf_adq.sh@54 -- # sleep 5 00:19:10.313 04:10:24 -- target/perf_adq.sh@87 -- # nvmftestinit 00:19:10.313 04:10:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.313 04:10:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.313 04:10:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:10.313 04:10:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.313 04:10:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.313 04:10:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.313 04:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.313 04:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.313 04:10:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:10.313 04:10:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:10.313 04:10:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.313 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:19:10.313 04:10:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:10.313 04:10:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:10.313 04:10:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:10.313 04:10:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:10.313 04:10:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:10.313 04:10:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:10.313 04:10:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:10.313 04:10:24 -- nvmf/common.sh@295 -- # net_devs=() 00:19:10.313 04:10:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:10.313 04:10:24 -- nvmf/common.sh@296 -- # e810=() 00:19:10.313 04:10:24 -- nvmf/common.sh@296 -- # local -ga e810 00:19:10.313 04:10:24 -- nvmf/common.sh@297 -- # x722=() 00:19:10.313 04:10:24 -- nvmf/common.sh@297 -- # local -ga x722 00:19:10.313 04:10:24 -- nvmf/common.sh@298 -- # mlx=() 00:19:10.313 04:10:24 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:10.313 04:10:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.313 04:10:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.314 04:10:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.314 04:10:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.314 04:10:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.314 04:10:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.314 04:10:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.314 04:10:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:10.314 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:10.314 04:10:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:10.314 04:10:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:10.314 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:10.314 04:10:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.314 04:10:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.314 04:10:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.314 04:10:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:10.314 Found net devices under 0000:af:00.0: cvl_0_0 00:19:10.314 04:10:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:10.314 04:10:24 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.314 04:10:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.314 04:10:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:10.314 Found net devices under 0000:af:00.1: cvl_0_1 00:19:10.314 04:10:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:10.314 04:10:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:10.314 04:10:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.314 04:10:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.314 04:10:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:10.314 04:10:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.314 04:10:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.314 04:10:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:10.314 04:10:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.314 04:10:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.314 04:10:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:10.314 04:10:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:10.314 04:10:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.314 04:10:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.314 04:10:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.314 04:10:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.314 04:10:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:10.314 04:10:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.314 04:10:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.314 04:10:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.314 04:10:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:10.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:19:10.314 00:19:10.314 --- 10.0.0.2 ping statistics --- 00:19:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.314 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:19:10.314 04:10:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:19:10.314 00:19:10.314 --- 10.0.0.1 ping statistics --- 00:19:10.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.314 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:10.314 04:10:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.314 04:10:24 -- nvmf/common.sh@411 -- # return 0 00:19:10.314 04:10:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:10.314 04:10:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.314 04:10:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:10.314 04:10:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.314 04:10:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:10.314 04:10:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:10.314 04:10:24 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:19:10.314 04:10:24 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:10.314 04:10:24 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:10.314 04:10:24 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:10.314 net.core.busy_poll = 1 00:19:10.314 04:10:24 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:10.314 net.core.busy_read = 1 00:19:10.314 04:10:24 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:10.314 04:10:24 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:10.314 04:10:24 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:10.314 04:10:24 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:10.314 04:10:24 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:10.314 04:10:24 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:10.315 04:10:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:10.315 04:10:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:10.315 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:19:10.315 04:10:24 -- nvmf/common.sh@470 -- # nvmfpid=3865039 00:19:10.315 04:10:24 -- nvmf/common.sh@471 -- # waitforlisten 3865039 00:19:10.315 04:10:24 -- common/autotest_common.sh@817 -- # '[' -z 3865039 ']' 00:19:10.315 04:10:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.315 04:10:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.315 04:10:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:10.315 04:10:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
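The adq_configure_driver block above is the host-side half of ADQ: hardware tc offload is enabled on the ice port, busy polling is switched on, and an mqprio channel plus a flower filter steer NVMe/TCP traffic for 10.0.0.2:4420 into its own hardware traffic class. Collected in one place, with the ip netns exec cvl_0_0_ns_spdk prefix dropped for readability, the sequence from this run is:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 takes queues 0-1, TC1 takes queues 2-3, offloaded in channel mode
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # match NVMe/TCP flows in hardware only (skip_sw) and pin them to TC1
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked above then ties transmit-queue selection (XPS) to the corresponding receive queues on the same port.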
00:19:10.315 04:10:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.315 04:10:24 -- common/autotest_common.sh@10 -- # set +x 00:19:10.315 [2024-04-19 04:10:24.594680] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:19:10.315 [2024-04-19 04:10:24.594737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.315 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.315 [2024-04-19 04:10:24.679136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.315 [2024-04-19 04:10:24.769984] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.315 [2024-04-19 04:10:24.770029] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.315 [2024-04-19 04:10:24.770039] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.315 [2024-04-19 04:10:24.770047] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.315 [2024-04-19 04:10:24.770054] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.315 [2024-04-19 04:10:24.770111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.315 [2024-04-19 04:10:24.770210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.315 [2024-04-19 04:10:24.770237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.315 [2024-04-19 04:10:24.770239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.251 04:10:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:11.251 04:10:25 -- common/autotest_common.sh@850 -- # return 0 00:19:11.251 04:10:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:11.251 04:10:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 04:10:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.251 04:10:25 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:19:11.251 04:10:25 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 [2024-04-19 04:10:25.671287] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 Malloc1 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.251 04:10:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.251 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.251 [2024-04-19 04:10:25.722909] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.251 04:10:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.251 04:10:25 -- target/perf_adq.sh@94 -- # perfpid=3865206 00:19:11.251 04:10:25 -- target/perf_adq.sh@95 -- # sleep 2 00:19:11.251 04:10:25 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:11.251 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.782 04:10:27 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:19:13.782 04:10:27 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:13.782 04:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.782 04:10:27 -- target/perf_adq.sh@97 -- # wc -l 00:19:13.782 04:10:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.782 04:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.782 04:10:27 -- target/perf_adq.sh@97 -- # count=3 00:19:13.782 04:10:27 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:19:13.782 04:10:27 -- target/perf_adq.sh@103 -- # wait 3865206 00:19:21.892 Initializing NVMe Controllers 00:19:21.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:21.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:21.892 Initialization complete. Launching workers. 
00:19:21.892 ======================================================== 00:19:21.892 Latency(us) 00:19:21.892 Device Information : IOPS MiB/s Average min max 00:19:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4067.80 15.89 15734.88 2804.64 63629.38 00:19:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3808.60 14.88 16812.36 2225.35 64975.25 00:19:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 3953.10 15.44 16191.17 2222.33 63774.36 00:19:21.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3726.90 14.56 17210.95 3146.23 65702.88 00:19:21.892 ======================================================== 00:19:21.892 Total : 15556.40 60.77 16468.25 2222.33 65702.88 00:19:21.892 00:19:21.892 04:10:35 -- target/perf_adq.sh@104 -- # nvmftestfini 00:19:21.892 04:10:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:21.892 04:10:35 -- nvmf/common.sh@117 -- # sync 00:19:21.892 04:10:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.892 04:10:35 -- nvmf/common.sh@120 -- # set +e 00:19:21.892 04:10:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.892 04:10:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.892 rmmod nvme_tcp 00:19:21.892 rmmod nvme_fabrics 00:19:21.892 rmmod nvme_keyring 00:19:21.892 04:10:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.892 04:10:35 -- nvmf/common.sh@124 -- # set -e 00:19:21.892 04:10:35 -- nvmf/common.sh@125 -- # return 0 00:19:21.892 04:10:35 -- nvmf/common.sh@478 -- # '[' -n 3865039 ']' 00:19:21.892 04:10:35 -- nvmf/common.sh@479 -- # killprocess 3865039 00:19:21.892 04:10:35 -- common/autotest_common.sh@936 -- # '[' -z 3865039 ']' 00:19:21.892 04:10:35 -- common/autotest_common.sh@940 -- # kill -0 3865039 00:19:21.892 04:10:35 -- common/autotest_common.sh@941 -- # uname 00:19:21.892 04:10:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.892 04:10:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3865039 00:19:21.892 04:10:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:21.892 04:10:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:21.892 04:10:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3865039' 00:19:21.892 killing process with pid 3865039 00:19:21.892 04:10:35 -- common/autotest_common.sh@955 -- # kill 3865039 00:19:21.892 04:10:35 -- common/autotest_common.sh@960 -- # wait 3865039 00:19:21.892 04:10:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:21.892 04:10:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:21.892 04:10:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:21.892 04:10:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.892 04:10:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.892 04:10:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.892 04:10:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.892 04:10:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.799 04:10:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.799 04:10:38 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:19:23.799 00:19:23.799 real 0m50.330s 00:19:23.799 user 2m50.784s 00:19:23.799 sys 0m9.285s 00:19:23.799 04:10:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:23.799 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:23.799 
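This second pass mirrors the first but with receive-side placement enabled (sock_impl_set_options --enable-placement-id 1 and a --sock-priority 1 transport) on top of the busy-poll driver setup, and the totals differ accordingly: 15556.40 IOPS here versus 33085.38 in the first table, at roughly double the average per-I/O latency (16468 us versus 7741 us). Each pass also sanity-checks qpair placement via nvmf_get_stats before trusting the numbers: the first required every one of the four poll groups to own exactly one I/O qpair, while this one appears to require at least two groups to sit completely idle, i.e. connections packed onto fewer cores. The same check can be run by hand (default RPC socket assumed; the jq filter is the one from this log):

    # count poll groups that currently own exactly one I/O qpair
    sudo ./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l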
************************************ 00:19:23.799 END TEST nvmf_perf_adq 00:19:23.799 ************************************ 00:19:24.058 04:10:38 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:24.058 04:10:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:24.058 04:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:24.058 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:24.058 ************************************ 00:19:24.058 START TEST nvmf_shutdown 00:19:24.058 ************************************ 00:19:24.058 04:10:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:24.058 * Looking for test storage... 00:19:24.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.058 04:10:38 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.058 04:10:38 -- nvmf/common.sh@7 -- # uname -s 00:19:24.058 04:10:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.058 04:10:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.058 04:10:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.058 04:10:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.058 04:10:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.058 04:10:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.058 04:10:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.058 04:10:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.058 04:10:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.316 04:10:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.316 04:10:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.316 04:10:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:24.316 04:10:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.316 04:10:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.316 04:10:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.316 04:10:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.316 04:10:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.316 04:10:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.316 04:10:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.316 04:10:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.316 04:10:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.316 04:10:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.316 04:10:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.316 04:10:38 -- paths/export.sh@5 -- # export PATH 00:19:24.316 04:10:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.316 04:10:38 -- nvmf/common.sh@47 -- # : 0 00:19:24.316 04:10:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.316 04:10:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.316 04:10:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.316 04:10:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.316 04:10:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.316 04:10:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.316 04:10:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.316 04:10:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.316 04:10:38 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:24.316 04:10:38 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:24.316 04:10:38 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:24.316 04:10:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:24.316 04:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:24.316 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:24.316 ************************************ 00:19:24.316 START TEST nvmf_shutdown_tc1 00:19:24.316 ************************************ 00:19:24.316 04:10:38 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:19:24.316 04:10:38 -- target/shutdown.sh@74 -- # starttarget 00:19:24.316 04:10:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:24.317 04:10:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:24.317 04:10:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.317 04:10:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:24.317 04:10:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:24.317 04:10:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:24.317 
04:10:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.317 04:10:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.317 04:10:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.317 04:10:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:24.317 04:10:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:24.317 04:10:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.317 04:10:38 -- common/autotest_common.sh@10 -- # set +x 00:19:29.674 04:10:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:29.674 04:10:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.674 04:10:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.674 04:10:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.674 04:10:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.674 04:10:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.674 04:10:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.674 04:10:44 -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.674 04:10:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.674 04:10:44 -- nvmf/common.sh@296 -- # e810=() 00:19:29.674 04:10:44 -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.674 04:10:44 -- nvmf/common.sh@297 -- # x722=() 00:19:29.674 04:10:44 -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.674 04:10:44 -- nvmf/common.sh@298 -- # mlx=() 00:19:29.674 04:10:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.674 04:10:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.674 04:10:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.674 04:10:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.674 04:10:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.674 04:10:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.674 04:10:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:29.674 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:29.674 04:10:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:19:29.674 04:10:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:29.674 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:29.674 04:10:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.674 04:10:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.674 04:10:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.675 04:10:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.675 04:10:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.675 04:10:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:29.934 04:10:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.934 04:10:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:29.934 Found net devices under 0000:af:00.0: cvl_0_0 00:19:29.934 04:10:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.934 04:10:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.934 04:10:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.934 04:10:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:29.934 04:10:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.934 04:10:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:29.934 Found net devices under 0000:af:00.1: cvl_0_1 00:19:29.934 04:10:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.934 04:10:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:29.934 04:10:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:29.934 04:10:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:29.934 04:10:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:29.934 04:10:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:29.934 04:10:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.934 04:10:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.934 04:10:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.934 04:10:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.934 04:10:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.934 04:10:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.934 04:10:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.934 04:10:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.934 04:10:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.934 04:10:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.934 04:10:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.934 04:10:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.934 04:10:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.934 04:10:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.934 04:10:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.934 04:10:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.934 04:10:44 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.934 04:10:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.934 04:10:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.934 04:10:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:19:29.934 00:19:29.934 --- 10.0.0.2 ping statistics --- 00:19:29.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.934 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:19:29.934 04:10:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:30.194 00:19:30.194 --- 10.0.0.1 ping statistics --- 00:19:30.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.194 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:30.194 04:10:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.194 04:10:44 -- nvmf/common.sh@411 -- # return 0 00:19:30.194 04:10:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:30.194 04:10:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.194 04:10:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:30.194 04:10:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:30.194 04:10:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.194 04:10:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:30.194 04:10:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:30.194 04:10:44 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:30.194 04:10:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:30.194 04:10:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.194 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.194 04:10:44 -- nvmf/common.sh@470 -- # nvmfpid=3870875 00:19:30.194 04:10:44 -- nvmf/common.sh@471 -- # waitforlisten 3870875 00:19:30.194 04:10:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:30.194 04:10:44 -- common/autotest_common.sh@817 -- # '[' -z 3870875 ']' 00:19:30.194 04:10:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.194 04:10:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.194 04:10:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.194 04:10:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.194 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.194 [2024-04-19 04:10:44.558086] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:19:30.194 [2024-04-19 04:10:44.558140] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.194 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.194 [2024-04-19 04:10:44.637323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.482 [2024-04-19 04:10:44.724505] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.482 [2024-04-19 04:10:44.724547] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.482 [2024-04-19 04:10:44.724558] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.482 [2024-04-19 04:10:44.724568] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.482 [2024-04-19 04:10:44.724575] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.482 [2024-04-19 04:10:44.724677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.482 [2024-04-19 04:10:44.724715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.482 [2024-04-19 04:10:44.724827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.482 [2024-04-19 04:10:44.724827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:30.482 04:10:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:30.482 04:10:44 -- common/autotest_common.sh@850 -- # return 0 00:19:30.482 04:10:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:30.482 04:10:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:30.482 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.482 04:10:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.482 04:10:44 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:30.482 04:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.482 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.482 [2024-04-19 04:10:44.882705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.482 04:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.482 04:10:44 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:30.482 04:10:44 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:30.482 04:10:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.482 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.482 04:10:44 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 
-- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:30.482 04:10:44 -- target/shutdown.sh@28 -- # cat 00:19:30.482 04:10:44 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:30.482 04:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.482 04:10:44 -- common/autotest_common.sh@10 -- # set +x 00:19:30.482 Malloc1 00:19:30.482 [2024-04-19 04:10:44.982674] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.756 Malloc2 00:19:30.756 Malloc3 00:19:30.756 Malloc4 00:19:30.756 Malloc5 00:19:30.756 Malloc6 00:19:30.756 Malloc7 00:19:30.756 Malloc8 00:19:31.014 Malloc9 00:19:31.014 Malloc10 00:19:31.014 04:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.014 04:10:45 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:31.014 04:10:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.014 04:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:31.014 04:10:45 -- target/shutdown.sh@78 -- # perfpid=3871085 00:19:31.014 04:10:45 -- target/shutdown.sh@79 -- # waitforlisten 3871085 /var/tmp/bdevperf.sock 00:19:31.014 04:10:45 -- common/autotest_common.sh@817 -- # '[' -z 3871085 ']' 00:19:31.014 04:10:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.014 04:10:45 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:31.014 04:10:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:31.014 04:10:45 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:31.014 04:10:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
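Here the shutdown test provisions ten subsystems in one shot: the for/cat loop above appends the RPCs for each of Malloc1 through Malloc10 into rpcs.txt and replays them with a single rpc_cmd, after which bdev_svc is started with its controller list generated as JSON on an inherited file descriptor. A one-at-a-time sketch of equivalent provisioning (serial numbers are illustrative; the harness batches these instead of looping over rpc.py):

    for i in $(seq 1 10); do
        sudo ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        sudo ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        sudo ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        sudo ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done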
00:19:31.014 04:10:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:31.014 04:10:45 -- nvmf/common.sh@521 -- # config=() 00:19:31.014 04:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:31.014 04:10:45 -- nvmf/common.sh@521 -- # local subsystem config 00:19:31.014 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": 
"$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 [2024-04-19 04:10:45.466406] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:19:31.015 [2024-04-19 04:10:45.466464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.015 )") 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.015 04:10:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.015 04:10:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.015 { 00:19:31.015 "params": { 00:19:31.015 "name": "Nvme$subsystem", 00:19:31.015 "trtype": "$TEST_TRANSPORT", 00:19:31.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.015 "adrfam": "ipv4", 00:19:31.015 "trsvcid": "$NVMF_PORT", 00:19:31.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.015 "hdgst": ${hdgst:-false}, 00:19:31.015 "ddgst": ${ddgst:-false} 00:19:31.015 }, 00:19:31.015 "method": "bdev_nvme_attach_controller" 00:19:31.015 } 00:19:31.015 EOF 00:19:31.016 )") 00:19:31.016 04:10:45 -- nvmf/common.sh@543 -- # cat 00:19:31.016 04:10:45 -- nvmf/common.sh@545 -- # jq . 
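The gen_nvmf_target_json function being traced here emits one bdev_nvme_attach_controller entry per subsystem number it is given: each heredoc above expands to a params block naming NvmeN, tcp/IPv4, 10.0.0.2:4420, cnodeN/hostN with digests disabled, and the collected fragments are joined with IFS=',' and pretty-printed through jq; the printf trace that follows shows the fully expanded JSON. As a standalone sketch (assuming the harness's test/nvmf/common.sh, which defines the function, sources cleanly in your environment), the same generator can drive any --json consumer:

    # generate config for controllers Nvme1..Nvme3 and hand it to bdevperf
    source test/nvmf/common.sh
    gen_nvmf_target_json 1 2 3 > /tmp/nvme_config.json
    sudo ./build/examples/bdevperf --json /tmp/nvme_config.json -q 64 -o 65536 -w verify -t 1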
00:19:31.016 04:10:45 -- nvmf/common.sh@546 -- # IFS=, 00:19:31.016 04:10:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme1", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme2", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme3", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme4", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme5", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme6", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme7", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme8", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": 
"bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme9", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 },{ 00:19:31.016 "params": { 00:19:31.016 "name": "Nvme10", 00:19:31.016 "trtype": "tcp", 00:19:31.016 "traddr": "10.0.0.2", 00:19:31.016 "adrfam": "ipv4", 00:19:31.016 "trsvcid": "4420", 00:19:31.016 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:31.016 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:31.016 "hdgst": false, 00:19:31.016 "ddgst": false 00:19:31.016 }, 00:19:31.016 "method": "bdev_nvme_attach_controller" 00:19:31.016 }' 00:19:31.016 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.274 [2024-04-19 04:10:45.547307] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.274 [2024-04-19 04:10:45.632080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.648 04:10:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:32.648 04:10:46 -- common/autotest_common.sh@850 -- # return 0 00:19:32.648 04:10:46 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:32.648 04:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.648 04:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:32.648 04:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.648 04:10:46 -- target/shutdown.sh@83 -- # kill -9 3871085 00:19:32.648 04:10:46 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:32.648 04:10:46 -- target/shutdown.sh@87 -- # sleep 1 00:19:33.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3871085 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:33.585 04:10:47 -- target/shutdown.sh@88 -- # kill -0 3870875 00:19:33.585 04:10:47 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:33.585 04:10:47 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:33.585 04:10:47 -- nvmf/common.sh@521 -- # config=() 00:19:33.585 04:10:47 -- nvmf/common.sh@521 -- # local subsystem config 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 
00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 
-- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 [2024-04-19 04:10:47.963033] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:19:33.585 [2024-04-19 04:10:47.963096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3871481 ] 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.585 "adrfam": "ipv4", 00:19:33.585 "trsvcid": "$NVMF_PORT", 00:19:33.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.585 "hdgst": ${hdgst:-false}, 00:19:33.585 "ddgst": ${ddgst:-false} 00:19:33.585 }, 00:19:33.585 "method": "bdev_nvme_attach_controller" 00:19:33.585 } 00:19:33.585 EOF 00:19:33.585 )") 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.585 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.585 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.585 { 00:19:33.585 "params": { 00:19:33.585 "name": "Nvme$subsystem", 00:19:33.585 "trtype": "$TEST_TRANSPORT", 00:19:33.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.586 04:10:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.586 04:10:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 04:10:47 -- nvmf/common.sh@543 -- # cat 00:19:33.586 04:10:47 -- nvmf/common.sh@545 -- # jq . 
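The generator has just been traced a second time because shutdown.sh@91 relaunches bdevperf against the still-running target, and the --json /dev/fd/62 argument in that invocation is bash process substitution rather than a file on disk. A minimal sketch of the idiom, using only the paths and flags shown in the trace (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds):

# <(...) expands to /dev/fd/NN, a pipe fed by gen_nvmf_target_json running
# in a subshell, so bdevperf reads its bdev config without a temp file.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1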
00:19:33.586 04:10:47 -- nvmf/common.sh@546 -- # IFS=, 00:19:33.586 04:10:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme1", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme2", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme3", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme4", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme5", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme6", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme7", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme8", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": 
"bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme9", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 },{ 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme10", 00:19:33.586 "trtype": "tcp", 00:19:33.586 "traddr": "10.0.0.2", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "4420", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:33.586 "hdgst": false, 00:19:33.586 "ddgst": false 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 }' 00:19:33.586 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.586 [2024-04-19 04:10:48.047089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.845 [2024-04-19 04:10:48.134094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.219 Running I/O for 1 seconds... 00:19:36.597 00:19:36.597 Latency(us) 00:19:36.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.597 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme1n1 : 1.03 185.89 11.62 0.00 0.00 339773.75 21686.46 308853.29 00:19:36.597 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme2n1 : 1.18 162.18 10.14 0.00 0.00 382167.66 24307.90 310759.80 00:19:36.597 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme3n1 : 1.21 210.90 13.18 0.00 0.00 288186.88 26095.24 305040.29 00:19:36.597 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme4n1 : 1.22 209.78 13.11 0.00 0.00 283808.58 21567.30 303133.79 00:19:36.597 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme5n1 : 1.14 179.05 11.19 0.00 0.00 314628.58 14120.03 306946.79 00:19:36.597 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme6n1 : 1.23 207.68 12.98 0.00 0.00 274985.89 25022.84 297414.28 00:19:36.597 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme7n1 : 1.24 207.08 12.94 0.00 0.00 268781.38 17158.52 299320.79 00:19:36.597 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme8n1 : 1.23 208.37 13.02 0.00 0.00 262239.88 48139.17 274536.26 00:19:36.597 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 Nvme9n1 : 1.24 209.37 13.09 0.00 0.00 255594.47 1854.37 318385.80 00:19:36.597 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:36.597 Verification LBA range: start 0x0 length 0x400 00:19:36.597 
Nvme10n1 : 1.25 205.02 12.81 0.00 0.00 254998.23 14000.87 341263.83 00:19:36.597 =================================================================================================================== 00:19:36.597 Total : 1985.33 124.08 0.00 0.00 288296.88 1854.37 341263.83 00:19:36.597 04:10:51 -- target/shutdown.sh@94 -- # stoptarget 00:19:36.597 04:10:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:36.597 04:10:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:36.597 04:10:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:36.597 04:10:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:36.597 04:10:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:36.597 04:10:51 -- nvmf/common.sh@117 -- # sync 00:19:36.856 04:10:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.856 04:10:51 -- nvmf/common.sh@120 -- # set +e 00:19:36.856 04:10:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.856 04:10:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.856 rmmod nvme_tcp 00:19:36.856 rmmod nvme_fabrics 00:19:36.856 rmmod nvme_keyring 00:19:36.856 04:10:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.856 04:10:51 -- nvmf/common.sh@124 -- # set -e 00:19:36.856 04:10:51 -- nvmf/common.sh@125 -- # return 0 00:19:36.856 04:10:51 -- nvmf/common.sh@478 -- # '[' -n 3870875 ']' 00:19:36.856 04:10:51 -- nvmf/common.sh@479 -- # killprocess 3870875 00:19:36.856 04:10:51 -- common/autotest_common.sh@936 -- # '[' -z 3870875 ']' 00:19:36.856 04:10:51 -- common/autotest_common.sh@940 -- # kill -0 3870875 00:19:36.856 04:10:51 -- common/autotest_common.sh@941 -- # uname 00:19:36.856 04:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:36.856 04:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3870875 00:19:36.856 04:10:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:36.856 04:10:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:36.856 04:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3870875' 00:19:36.856 killing process with pid 3870875 00:19:36.856 04:10:51 -- common/autotest_common.sh@955 -- # kill 3870875 00:19:36.856 04:10:51 -- common/autotest_common.sh@960 -- # wait 3870875 00:19:37.425 04:10:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:37.425 04:10:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:37.425 04:10:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:37.425 04:10:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.425 04:10:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.425 04:10:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.425 04:10:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.425 04:10:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.344 04:10:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.344 00:19:39.344 real 0m15.014s 00:19:39.344 user 0m33.365s 00:19:39.344 sys 0m5.666s 00:19:39.344 04:10:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:39.344 04:10:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.344 ************************************ 00:19:39.344 END TEST nvmf_shutdown_tc1 00:19:39.344 ************************************ 00:19:39.344 04:10:53 -- target/shutdown.sh@148 -- # run_test 
nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:39.344 04:10:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:39.344 04:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:39.344 04:10:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.603 ************************************ 00:19:39.603 START TEST nvmf_shutdown_tc2 00:19:39.603 ************************************ 00:19:39.603 04:10:53 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:19:39.603 04:10:53 -- target/shutdown.sh@99 -- # starttarget 00:19:39.603 04:10:53 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:39.603 04:10:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:39.603 04:10:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.603 04:10:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:39.603 04:10:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:39.603 04:10:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:39.603 04:10:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.603 04:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.603 04:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.603 04:10:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:39.603 04:10:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.603 04:10:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.603 04:10:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:39.603 04:10:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.603 04:10:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.603 04:10:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.603 04:10:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.603 04:10:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.603 04:10:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.603 04:10:53 -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.603 04:10:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.603 04:10:53 -- nvmf/common.sh@296 -- # e810=() 00:19:39.603 04:10:53 -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.603 04:10:53 -- nvmf/common.sh@297 -- # x722=() 00:19:39.603 04:10:53 -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.603 04:10:53 -- nvmf/common.sh@298 -- # mlx=() 00:19:39.603 04:10:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.603 04:10:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.603 04:10:53 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:39.603 04:10:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:39.603 04:10:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.603 04:10:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.603 04:10:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:39.603 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:39.603 04:10:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.603 04:10:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:39.603 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:39.603 04:10:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.603 04:10:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:39.603 04:10:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.603 04:10:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.603 04:10:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:39.603 04:10:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.603 04:10:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:39.603 Found net devices under 0000:af:00.0: cvl_0_0 00:19:39.603 04:10:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.603 04:10:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.603 04:10:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.604 04:10:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:39.604 04:10:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.604 04:10:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:39.604 Found net devices under 0000:af:00.1: cvl_0_1 00:19:39.604 04:10:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.604 04:10:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:39.604 04:10:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:39.604 04:10:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:39.604 04:10:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:39.604 04:10:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:39.604 04:10:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.604 04:10:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.604 04:10:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.604 04:10:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.604 04:10:53 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0
00:19:39.604 04:10:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:39.604 04:10:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:39.604 04:10:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:39.604 04:10:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:39.604 04:10:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:39.604 04:10:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:39.604 04:10:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:39.604 04:10:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:39.604 04:10:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:39.604 04:10:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:39.604 04:10:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:39.604 04:10:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:39.863 04:10:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:39.863 04:10:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:39.863 04:10:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:39.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:39.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms
00:19:39.863 
00:19:39.863 --- 10.0.0.2 ping statistics ---
00:19:39.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:39.863 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:19:39.863 04:10:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:39.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:39.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms
00:19:39.863 
00:19:39.863 --- 10.0.0.1 ping statistics ---
00:19:39.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:39.863 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms
00:19:39.863 04:10:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:39.863 04:10:54 -- nvmf/common.sh@411 -- # return 0
00:19:39.863 04:10:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:39.863 04:10:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:39.863 04:10:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:39.863 04:10:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:39.863 04:10:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:39.863 04:10:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:39.863 04:10:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:19:39.863 04:10:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:39.863 04:10:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:39.863 04:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:39.863 04:10:54 -- common/autotest_common.sh@10 -- # set +x
00:19:39.863 04:10:54 -- nvmf/common.sh@470 -- # nvmfpid=3872744
00:19:39.863 04:10:54 -- nvmf/common.sh@471 -- # waitforlisten 3872744
00:19:39.863 04:10:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:19:39.863 04:10:54 -- common/autotest_common.sh@817 -- # '[' -z 3872744 ']'
00:19:39.863 04:10:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:39.863 04:10:54 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:39.863 04:10:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:39.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:39.863 04:10:54 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:39.863 04:10:54 -- common/autotest_common.sh@10 -- # set +x
00:19:39.863 [2024-04-19 04:10:54.277392] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:19:39.863 [2024-04-19 04:10:54.277430] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:39.863 EAL: No free 2048 kB hugepages reported on node 1
00:19:40.121 [2024-04-19 04:10:54.343453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:40.121 [2024-04-19 04:10:54.428958] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:40.121 [2024-04-19 04:10:54.429004] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:40.121 [2024-04-19 04:10:54.429015] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:40.121 [2024-04-19 04:10:54.429024] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:40.121 [2024-04-19 04:10:54.429032] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
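The nvmf_tcp_init sequence above is the entire network fixture for the tc2 run: one E810 port (cvl_0_0) is moved into a private network namespace to play the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with a ping in each direction as a sanity check. Collected from the trace into one block (every command here is verbatim from the log; run as root with those interfaces present):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port disappears into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

From this point every target-side command is prefixed through the NVMF_TARGET_NS_CMD array, which is why the nvmf_tgt launch above carries the doubled ip netns exec cvl_0_0_ns_spdk prefix.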
00:19:40.121 [2024-04-19 04:10:54.429138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.121 [2024-04-19 04:10:54.429253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.121 [2024-04-19 04:10:54.429381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.121 [2024-04-19 04:10:54.429381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.121 04:10:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.121 04:10:54 -- common/autotest_common.sh@850 -- # return 0 00:19:40.121 04:10:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:40.121 04:10:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.121 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 04:10:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.121 04:10:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.121 04:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.121 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:40.121 [2024-04-19 04:10:54.594726] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.122 04:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.122 04:10:54 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:40.122 04:10:54 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:40.122 04:10:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:40.122 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:40.122 04:10:54 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.122 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.122 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.378 04:10:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.378 04:10:54 -- target/shutdown.sh@28 -- # cat 00:19:40.378 04:10:54 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:40.378 04:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.378 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:19:40.378 Malloc1 00:19:40.378 [2024-04-19 04:10:54.694858] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.378 Malloc2 
00:19:40.378 Malloc3 00:19:40.378 Malloc4 00:19:40.378 Malloc5 00:19:40.378 Malloc6 00:19:40.636 Malloc7 00:19:40.636 Malloc8 00:19:40.636 Malloc9 00:19:40.636 Malloc10 00:19:40.636 04:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.636 04:10:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:40.636 04:10:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.636 04:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:40.636 04:10:55 -- target/shutdown.sh@103 -- # perfpid=3872944 00:19:40.636 04:10:55 -- target/shutdown.sh@104 -- # waitforlisten 3872944 /var/tmp/bdevperf.sock 00:19:40.636 04:10:55 -- common/autotest_common.sh@817 -- # '[' -z 3872944 ']' 00:19:40.636 04:10:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.636 04:10:55 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:40.636 04:10:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.636 04:10:55 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.636 04:10:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.636 04:10:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.636 04:10:55 -- nvmf/common.sh@521 -- # config=() 00:19:40.636 04:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:40.636 04:10:55 -- nvmf/common.sh@521 -- # local subsystem config 00:19:40.636 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.636 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.636 { 00:19:40.636 "params": { 00:19:40.636 "name": "Nvme$subsystem", 00:19:40.636 "trtype": "$TEST_TRANSPORT", 00:19:40.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.636 "adrfam": "ipv4", 00:19:40.636 "trsvcid": "$NVMF_PORT", 00:19:40.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.636 "hdgst": ${hdgst:-false}, 00:19:40.636 "ddgst": ${ddgst:-false} 00:19:40.636 }, 00:19:40.636 "method": "bdev_nvme_attach_controller" 00:19:40.636 } 00:19:40.636 EOF 00:19:40.636 )") 00:19:40.636 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.636 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.636 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.636 { 00:19:40.636 "params": { 00:19:40.636 "name": "Nvme$subsystem", 00:19:40.636 "trtype": "$TEST_TRANSPORT", 00:19:40.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.636 "adrfam": "ipv4", 00:19:40.637 "trsvcid": "$NVMF_PORT", 00:19:40.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.637 "hdgst": ${hdgst:-false}, 00:19:40.637 "ddgst": ${ddgst:-false} 00:19:40.637 }, 00:19:40.637 "method": "bdev_nvme_attach_controller" 00:19:40.637 } 00:19:40.637 EOF 00:19:40.637 )") 00:19:40.637 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.637 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.637 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.637 { 00:19:40.637 "params": { 00:19:40.637 "name": "Nvme$subsystem", 00:19:40.637 "trtype": "$TEST_TRANSPORT", 00:19:40.637 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:19:40.637 "adrfam": "ipv4", 00:19:40.637 "trsvcid": "$NVMF_PORT", 00:19:40.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.637 "hdgst": ${hdgst:-false}, 00:19:40.637 "ddgst": ${ddgst:-false} 00:19:40.637 }, 00:19:40.637 "method": "bdev_nvme_attach_controller" 00:19:40.637 } 00:19:40.637 EOF 00:19:40.637 )") 00:19:40.637 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.637 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.637 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.637 { 00:19:40.637 "params": { 00:19:40.637 "name": "Nvme$subsystem", 00:19:40.637 "trtype": "$TEST_TRANSPORT", 00:19:40.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.637 "adrfam": "ipv4", 00:19:40.637 "trsvcid": "$NVMF_PORT", 00:19:40.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.637 "hdgst": ${hdgst:-false}, 00:19:40.637 "ddgst": ${ddgst:-false} 00:19:40.637 }, 00:19:40.637 "method": "bdev_nvme_attach_controller" 00:19:40.637 } 00:19:40.637 EOF 00:19:40.637 )") 00:19:40.637 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.895 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.895 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.895 { 00:19:40.895 "params": { 00:19:40.895 "name": "Nvme$subsystem", 00:19:40.895 "trtype": "$TEST_TRANSPORT", 00:19:40.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.895 "adrfam": "ipv4", 00:19:40.895 "trsvcid": "$NVMF_PORT", 00:19:40.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.895 "hdgst": ${hdgst:-false}, 00:19:40.895 "ddgst": ${ddgst:-false} 00:19:40.895 }, 00:19:40.895 "method": "bdev_nvme_attach_controller" 00:19:40.895 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.896 { 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme$subsystem", 00:19:40.896 "trtype": "$TEST_TRANSPORT", 00:19:40.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "$NVMF_PORT", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.896 "hdgst": ${hdgst:-false}, 00:19:40.896 "ddgst": ${ddgst:-false} 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.896 { 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme$subsystem", 00:19:40.896 "trtype": "$TEST_TRANSPORT", 00:19:40.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "$NVMF_PORT", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.896 "hdgst": ${hdgst:-false}, 00:19:40.896 "ddgst": ${ddgst:-false} 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 [2024-04-19 04:10:55.180058] Starting SPDK v24.05-pre git sha1 
77a84e60e / DPDK 23.11.0 initialization... 00:19:40.896 [2024-04-19 04:10:55.180113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872944 ] 00:19:40.896 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.896 { 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme$subsystem", 00:19:40.896 "trtype": "$TEST_TRANSPORT", 00:19:40.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "$NVMF_PORT", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.896 "hdgst": ${hdgst:-false}, 00:19:40.896 "ddgst": ${ddgst:-false} 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.896 { 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme$subsystem", 00:19:40.896 "trtype": "$TEST_TRANSPORT", 00:19:40.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "$NVMF_PORT", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.896 "hdgst": ${hdgst:-false}, 00:19:40.896 "ddgst": ${ddgst:-false} 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 04:10:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.896 { 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme$subsystem", 00:19:40.896 "trtype": "$TEST_TRANSPORT", 00:19:40.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "$NVMF_PORT", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.896 "hdgst": ${hdgst:-false}, 00:19:40.896 "ddgst": ${ddgst:-false} 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 } 00:19:40.896 EOF 00:19:40.896 )") 00:19:40.896 04:10:55 -- nvmf/common.sh@543 -- # cat 00:19:40.896 04:10:55 -- nvmf/common.sh@545 -- # jq . 
00:19:40.896 04:10:55 -- nvmf/common.sh@546 -- # IFS=, 00:19:40.896 04:10:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme1", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme2", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme3", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme4", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme5", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme6", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.896 "hdgst": false, 00:19:40.896 "ddgst": false 00:19:40.896 }, 00:19:40.896 "method": "bdev_nvme_attach_controller" 00:19:40.896 },{ 00:19:40.896 "params": { 00:19:40.896 "name": "Nvme7", 00:19:40.896 "trtype": "tcp", 00:19:40.896 "traddr": "10.0.0.2", 00:19:40.896 "adrfam": "ipv4", 00:19:40.896 "trsvcid": "4420", 00:19:40.896 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.896 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.897 "hdgst": false, 00:19:40.897 "ddgst": false 00:19:40.897 }, 00:19:40.897 "method": "bdev_nvme_attach_controller" 00:19:40.897 },{ 00:19:40.897 "params": { 00:19:40.897 "name": "Nvme8", 00:19:40.897 "trtype": "tcp", 00:19:40.897 "traddr": "10.0.0.2", 00:19:40.897 "adrfam": "ipv4", 00:19:40.897 "trsvcid": "4420", 00:19:40.897 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.897 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:40.897 "hdgst": false, 00:19:40.897 "ddgst": false 00:19:40.897 }, 00:19:40.897 "method": 
"bdev_nvme_attach_controller" 00:19:40.897 },{ 00:19:40.897 "params": { 00:19:40.897 "name": "Nvme9", 00:19:40.897 "trtype": "tcp", 00:19:40.897 "traddr": "10.0.0.2", 00:19:40.897 "adrfam": "ipv4", 00:19:40.897 "trsvcid": "4420", 00:19:40.897 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.897 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:40.897 "hdgst": false, 00:19:40.897 "ddgst": false 00:19:40.897 }, 00:19:40.897 "method": "bdev_nvme_attach_controller" 00:19:40.897 },{ 00:19:40.897 "params": { 00:19:40.897 "name": "Nvme10", 00:19:40.897 "trtype": "tcp", 00:19:40.897 "traddr": "10.0.0.2", 00:19:40.897 "adrfam": "ipv4", 00:19:40.897 "trsvcid": "4420", 00:19:40.897 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.897 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.897 "hdgst": false, 00:19:40.897 "ddgst": false 00:19:40.897 }, 00:19:40.897 "method": "bdev_nvme_attach_controller" 00:19:40.897 }' 00:19:40.897 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.897 [2024-04-19 04:10:55.261268] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.897 [2024-04-19 04:10:55.347087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.270 Running I/O for 10 seconds... 00:19:42.837 04:10:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.838 04:10:57 -- common/autotest_common.sh@850 -- # return 0 00:19:42.838 04:10:57 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:42.838 04:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.838 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:42.838 04:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.838 04:10:57 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:42.838 04:10:57 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:42.838 04:10:57 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:42.838 04:10:57 -- target/shutdown.sh@57 -- # local ret=1 00:19:42.838 04:10:57 -- target/shutdown.sh@58 -- # local i 00:19:42.838 04:10:57 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:42.838 04:10:57 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:42.838 04:10:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:42.838 04:10:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:42.838 04:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.838 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:42.838 04:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.838 04:10:57 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:42.838 04:10:57 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:42.838 04:10:57 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:43.096 04:10:57 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:43.096 04:10:57 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:43.096 04:10:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:43.096 04:10:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:43.096 04:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.096 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:19:43.096 04:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.096 04:10:57 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:43.096 04:10:57 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:43.096 04:10:57 -- target/shutdown.sh@67 -- # sleep 0.25 
00:19:43.355 04:10:57 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:43.355 04:10:57 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:43.355 04:10:57 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:43.355 04:10:57 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:43.355 04:10:57 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.355 04:10:57 -- common/autotest_common.sh@10 -- # set +x
00:19:43.355 04:10:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.355 04:10:57 -- target/shutdown.sh@60 -- # read_io_count=131
00:19:43.355 04:10:57 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:19:43.355 04:10:57 -- target/shutdown.sh@64 -- # ret=0
00:19:43.355 04:10:57 -- target/shutdown.sh@65 -- # break
00:19:43.355 04:10:57 -- target/shutdown.sh@69 -- # return 0
00:19:43.355 04:10:57 -- target/shutdown.sh@110 -- # killprocess 3872944
00:19:43.355 04:10:57 -- common/autotest_common.sh@936 -- # '[' -z 3872944 ']'
00:19:43.355 04:10:57 -- common/autotest_common.sh@940 -- # kill -0 3872944
00:19:43.355 04:10:57 -- common/autotest_common.sh@941 -- # uname
00:19:43.355 04:10:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:43.355 04:10:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3872944
00:19:43.614 04:10:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:43.614 04:10:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:43.614 04:10:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3872944'
00:19:43.614 killing process with pid 3872944
00:19:43.614 04:10:57 -- common/autotest_common.sh@955 -- # kill 3872944
00:19:43.614 04:10:57 -- common/autotest_common.sh@960 -- # wait 3872944
00:19:43.614 Received shutdown signal, test time was about 1.209950 seconds
00:19:43.614
00:19:43.614 Latency(us)
00:19:43.614 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400
00:19:43.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.614 Nvme1n1 : 1.17 163.85 10.24 0.00 0.00 386156.61 50283.99 285975.27
00:19:43.614 Nvme2n1 : 1.20 212.94 13.31 0.00 0.00 291056.87 23592.96 299320.79
00:19:43.614 Nvme3n1 : 1.19 219.80 13.74 0.00 0.00 274919.47 6434.44 289788.28
00:19:43.614 Nvme4n1 : 1.20 213.61 13.35 0.00 0.00 278333.21 20137.43 308853.29
00:19:43.614 Nvme5n1 : 1.21 212.23 13.26 0.00 0.00 274521.60 16681.89 299320.79
00:19:43.614 Nvme6n1 : 1.17 163.47 10.22 0.00 0.00 347790.12 26333.56 306946.79
00:19:43.614 Nvme7n1 : 1.17 164.67 10.29 0.00 0.00 336680.34 23116.33 305040.29
00:19:43.614 Nvme8n1 : 1.21 211.75 13.23 0.00 0.00 257566.72 18588.39 312666.30
00:19:43.614 Nvme9n1 : 1.19 214.95 13.43 0.00 0.00 247124.71 18826.71 320292.31
00:19:43.614 Nvme10n1 : 1.18 162.09 10.13 0.00 0.00 319594.74 22639.71 346983.33
00:19:43.614 ===================================================================================================================
00:19:43.614 Total : 1939.37 121.21 0.00 0.00 296197.04 6434.44 346983.33
00:19:43.872 04:10:58 -- target/shutdown.sh@113 -- # sleep 1
00:19:44.822 04:10:59 -- target/shutdown.sh@114 -- # kill -0 3872744
00:19:44.822 04:10:59 -- target/shutdown.sh@116 -- # stoptarget
00:19:44.822 04:10:59 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:44.822 04:10:59 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:44.822 04:10:59 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:44.822 04:10:59 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:44.822 04:10:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:44.822 04:10:59 -- nvmf/common.sh@117 -- # sync
00:19:44.822 04:10:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:44.822 04:10:59 -- nvmf/common.sh@120 -- # set +e
00:19:44.822 04:10:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:44.822 04:10:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:44.822 rmmod nvme_tcp
00:19:44.822 rmmod nvme_fabrics
00:19:44.822 rmmod nvme_keyring
00:19:44.822 04:10:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:44.822 04:10:59 -- nvmf/common.sh@124 -- # set -e
00:19:44.822 04:10:59 -- nvmf/common.sh@125 -- # return 0
00:19:44.822 04:10:59 -- nvmf/common.sh@478 -- # '[' -n 3872744 ']'
00:19:44.822 04:10:59 -- nvmf/common.sh@479 -- # killprocess 3872744
00:19:44.822 04:10:59 -- common/autotest_common.sh@936 -- # '[' -z 3872744 ']'
00:19:44.822 04:10:59 -- common/autotest_common.sh@940 -- # kill -0 3872744
00:19:44.822 04:10:59 -- common/autotest_common.sh@941 -- # uname
00:19:44.822 04:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:44.822 04:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3872744
00:19:45.081 04:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:45.081 04:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:45.081 04:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3872744'
00:19:45.081 killing process with pid 3872744
00:19:45.081 04:10:59 -- common/autotest_common.sh@955 -- # kill 3872744
00:19:45.081 04:10:59 -- common/autotest_common.sh@960 -- # wait 3872744
00:19:45.340 04:10:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:19:45.341 04:10:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:19:45.341 04:10:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:19:45.341 04:10:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:45.341 04:10:59 -- nvmf/common.sh@278 -- # remove_spdk_ns
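The bdevperf process (pid 3872944) and the nvmf target (pid 3872744) are both stopped through the same killprocess helper traced above. Condensed from those xtrace lines, the pattern is: require a non-empty, still-alive pid, refuse to signal a process whose comm is sudo, then terminate it and reap the exit status. A simplified sketch of that pattern, with error handling reduced:

  killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1              # no pid, nothing to do
    kill -0 "$pid" || return 1             # still running?
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" != sudo ] || return 1   # never signal the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # reap; only works for our own children
  }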
00:19:45.341 04:10:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:45.341 04:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:45.341 04:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:47.909 04:11:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:47.909
00:19:47.909 real 0m7.965s
00:19:47.909 user 0m24.334s
00:19:47.909 sys 0m1.392s
00:19:47.909 04:11:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:47.909 04:11:01 -- common/autotest_common.sh@10 -- # set +x
00:19:47.910 ************************************
00:19:47.910 END TEST nvmf_shutdown_tc2
00:19:47.910 ************************************
00:19:47.910 04:11:01 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:19:47.910 04:11:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:47.910 04:11:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:47.910 04:11:01 -- common/autotest_common.sh@10 -- # set +x
00:19:47.910 ************************************
00:19:47.910 START TEST nvmf_shutdown_tc3
00:19:47.910 ************************************
00:19:47.910 04:11:02 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3
00:19:47.910 04:11:02 -- target/shutdown.sh@121 -- # starttarget
00:19:47.910 04:11:02 -- target/shutdown.sh@15 -- # nvmftestinit
00:19:47.910 04:11:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:19:47.910 04:11:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:47.910 04:11:02 -- nvmf/common.sh@437 -- # prepare_net_devs
00:19:47.910 04:11:02 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:19:47.910 04:11:02 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:19:47.910 04:11:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:47.910 04:11:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:47.910 04:11:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:47.910 04:11:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:19:47.910 04:11:02 -- nvmf/common.sh@285 -- # xtrace_disable
00:19:47.910 04:11:02 -- common/autotest_common.sh@10 -- # set +x
00:19:47.910 04:11:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:19:47.910 04:11:02 -- nvmf/common.sh@291 -- # pci_devs=()
00:19:47.910 04:11:02 -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:47.910 04:11:02 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:47.910 04:11:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:47.910 04:11:02 -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:47.910 04:11:02 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:47.910 04:11:02 -- nvmf/common.sh@295 -- # net_devs=()
00:19:47.910 04:11:02 -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:47.910 04:11:02 -- nvmf/common.sh@296 -- # e810=()
00:19:47.910 04:11:02 -- nvmf/common.sh@296 -- # local -ga e810
00:19:47.910 04:11:02 -- nvmf/common.sh@297 -- # x722=()
00:19:47.910 04:11:02 -- nvmf/common.sh@297 -- # local -ga x722
00:19:47.910 04:11:02 -- nvmf/common.sh@298 -- # mlx=()
00:19:47.910 04:11:02 -- nvmf/common.sh@298 -- # local -ga mlx
00:19:47.910 04:11:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:47.910 04:11:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:47.910 04:11:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:19:47.910 Found 0000:af:00.0 (0x8086 - 0x159b)
00:19:47.910 04:11:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:47.910 04:11:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:19:47.910 Found 0000:af:00.1 (0x8086 - 0x159b)
00:19:47.910 04:11:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:47.910 04:11:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:47.910 04:11:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:47.910 04:11:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:19:47.910 Found net devices under 0000:af:00.0: cvl_0_0
00:19:47.910 04:11:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:47.910 04:11:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:47.910 04:11:02 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:47.910 04:11:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:19:47.910 Found net devices under 0000:af:00.1: cvl_0_1
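Device selection above works off PCI vendor/device ids (0x8086:0x159b picks out the two e810 ports), after which each chosen function is mapped to its kernel interface through sysfs. That sysfs walk, isolated into a runnable loop with this rig's two bus addresses (any NIC's bus/device/function works the same way):

  for pci in 0000:af:00.0 0000:af:00.1; do
    # every netdev bound to this PCI function appears as a directory here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done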
00:19:47.910 04:11:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@403 -- # is_hw=yes
00:19:47.910 04:11:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:19:47.910 04:11:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:47.910 04:11:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:47.910 04:11:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:47.910 04:11:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:47.910 04:11:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:47.910 04:11:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:47.910 04:11:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:47.910 04:11:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:47.910 04:11:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:47.910 04:11:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:47.910 04:11:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:47.910 04:11:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:47.910 04:11:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:47.910 04:11:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:47.910 04:11:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:47.910 04:11:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:47.910 04:11:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:47.910 04:11:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:47.910 04:11:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:47.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:47.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:19:47.910
00:19:47.910 --- 10.0.0.2 ping statistics ---
00:19:47.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:47.910 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:19:47.910 04:11:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:47.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:47.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms
00:19:47.910
00:19:47.910 --- 10.0.0.1 ping statistics ---
00:19:47.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:47.910 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:19:47.910 04:11:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:47.910 04:11:02 -- nvmf/common.sh@411 -- # return 0
00:19:47.910 04:11:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:47.910 04:11:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:47.910 04:11:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:47.910 04:11:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:47.910 04:11:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:47.910 04:11:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:19:47.910 04:11:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:47.910 04:11:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:47.910 04:11:02 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:47.910 04:11:02 -- common/autotest_common.sh@10 -- # set +x
00:19:47.910 04:11:02 -- nvmf/common.sh@470 -- # nvmfpid=3874388
00:19:47.911 04:11:02 -- nvmf/common.sh@471 -- # waitforlisten 3874388
00:19:47.911 04:11:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:19:47.911 04:11:02 -- common/autotest_common.sh@817 -- # '[' -z 3874388 ']'
00:19:47.911 04:11:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:47.911 04:11:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:47.911 04:11:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:47.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:47.911 04:11:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:48.169 04:11:02 -- common/autotest_common.sh@10 -- # set +x
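The nvmf_tcp_init sequence above builds a two-namespace test topology: the target-side port moves into a private network namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, TCP port 4420 is opened, and reachability is verified with a ping in each direction. The same commands, collected into one root-requiring block (the interface names are this rig's cvl_0_* ports):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator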
00:19:48.169 [2024-04-19 04:11:02.619853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.169 [2024-04-19 04:11:02.619969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.169 [2024-04-19 04:11:02.620058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:48.169 [2024-04-19 04:11:02.620058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.102 04:11:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.102 04:11:03 -- common/autotest_common.sh@850 -- # return 0 00:19:49.102 04:11:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:49.102 04:11:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:49.102 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.102 04:11:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.102 04:11:03 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.102 04:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.102 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.102 [2024-04-19 04:11:03.348590] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.102 04:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.102 04:11:03 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:49.102 04:11:03 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:49.102 04:11:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:49.102 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.102 04:11:03 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.102 04:11:03 -- target/shutdown.sh@28 -- # cat 00:19:49.102 04:11:03 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:49.102 04:11:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.102 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.102 Malloc1 00:19:49.102 [2024-04-19 04:11:03.448707] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.102 Malloc2 
00:19:49.102 Malloc3
00:19:49.102 Malloc4
00:19:49.102 Malloc5
00:19:49.360 Malloc6
00:19:49.360 Malloc7
00:19:49.360 Malloc8
00:19:49.360 Malloc9
00:19:49.360 Malloc10
00:19:49.360 04:11:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:49.360 04:11:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:19:49.360 04:11:03 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:49.360 04:11:03 -- common/autotest_common.sh@10 -- # set +x
00:19:49.618 04:11:03 -- target/shutdown.sh@125 -- # perfpid=3874698
00:19:49.618 04:11:03 -- target/shutdown.sh@126 -- # waitforlisten 3874698 /var/tmp/bdevperf.sock
00:19:49.618 04:11:03 -- common/autotest_common.sh@817 -- # '[' -z 3874698 ']'
00:19:49.618 04:11:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:49.618 04:11:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:49.618 04:11:03 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:19:49.618 04:11:03 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:19:49.618 04:11:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:49.618 04:11:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:49.618 04:11:03 -- common/autotest_common.sh@10 -- # set +x
00:19:49.618 04:11:03 -- nvmf/common.sh@521 -- # config=()
00:19:49.618 04:11:03 -- nvmf/common.sh@521 -- # local subsystem config
00:19:49.618 04:11:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:19:49.618 04:11:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:19:49.618 {
00:19:49.618 "params": {
00:19:49.618 "name": "Nvme$subsystem",
00:19:49.618 "trtype": "$TEST_TRANSPORT",
00:19:49.618 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:49.619 "adrfam": "ipv4",
00:19:49.619 "trsvcid": "$NVMF_PORT",
00:19:49.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:49.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:49.619 "hdgst": ${hdgst:-false},
00:19:49.619 "ddgst": ${ddgst:-false}
00:19:49.619 },
00:19:49.619 "method": "bdev_nvme_attach_controller"
00:19:49.619 }
00:19:49.619 EOF
00:19:49.619 )")
00:19:49.619 04:11:03 -- nvmf/common.sh@543 -- # cat
00:19:49.619 [ the for/config+=/cat sequence above repeats identically for each of the ten subsystems; bdevperf's startup banner below arrives interleaved with the final repeats ]
00:19:49.619 [2024-04-19 04:11:03.937301] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:19:49.619 [2024-04-19 04:11:03.937362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874698 ]
00:19:49.619 04:11:03 -- nvmf/common.sh@545 -- # jq .
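The @521-@547 lines trace gen_nvmf_target_json: one heredoc stanza per subsystem argument, comma-joined under IFS=, (the join and printf appear just below) and passed through jq on the way to bdevperf, which reads the document over --json /dev/fd/63, i.e. process substitution. The following is a skeleton consistent with that trace, not a copy of the script; in particular the outer subsystems/bdev wrapper is an assumption about the shape bdevperf accepts, since this log never prints it.

  gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
      config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
      )")
    done
    # comma-join the stanzas (this is the text the log dumps verbatim below)
    # and let jq validate/pretty-print the assumed wrapper around them
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(
      IFS=","
      printf '%s\n' "${config[*]}"
    ) ] } ] }
JSON
  }

  # consumed the way the @124/@126 trace lines show:
  bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10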
00:19:49.619 04:11:03 -- nvmf/common.sh@546 -- # IFS=,
00:19:49.619 04:11:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" },{
00:19:49.619 [ stanzas for Nvme2 through Nvme9 follow, identical in form to the dump earlier in this run with matching cnodeN/hostN, ending with ]
00:19:49.620 "params": { "name": "Nvme10", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode10", "hostnqn": "nqn.2016-06.io.spdk:host10", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
00:19:49.620 EAL: No free 2048 kB hugepages reported on node 1
00:19:49.620 [2024-04-19 04:11:04.018569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:49.620 [2024-04-19 04:11:04.102869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:51.269 Running I/O for 10 seconds...
00:19:51.517 04:11:05 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:51.517 04:11:05 -- common/autotest_common.sh@850 -- # return 0
00:19:51.517 04:11:05 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:19:51.517 04:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:51.517 04:11:05 -- common/autotest_common.sh@10 -- # set +x
00:19:51.517 04:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:51.517 04:11:05 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:51.517 04:11:05 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:19:51.517 04:11:05 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:19:51.517 04:11:05 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:19:51.517 04:11:05 -- target/shutdown.sh@57 -- # local ret=1
00:19:51.517 04:11:05 -- target/shutdown.sh@58 -- # local i
00:19:51.517 04:11:05 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:19:51.517 04:11:05 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:51.517 04:11:05 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:51.517 04:11:05 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:51.517 04:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:51.517 04:11:05 -- common/autotest_common.sh@10 -- # set +x
00:19:51.517 04:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:51.517 04:11:05 -- target/shutdown.sh@60 -- # read_io_count=3
00:19:51.517 04:11:05 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:19:51.517 04:11:05 -- target/shutdown.sh@67 -- # sleep 0.25
00:19:51.775 04:11:06 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:51.776 04:11:06 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:51.776 04:11:06 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:51.776 04:11:06 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:51.776 04:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:51.776 04:11:06 -- common/autotest_common.sh@10 -- # set +x
00:19:51.776 04:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:51.776 04:11:06 -- target/shutdown.sh@60 -- # read_io_count=67
00:19:51.776 04:11:06 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:19:51.776 04:11:06 -- target/shutdown.sh@67 -- # sleep 0.25
00:19:52.033 04:11:06 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:52.033 04:11:06 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:52.033 04:11:06 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:52.033 04:11:06 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:52.033 04:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:52.033 04:11:06 -- common/autotest_common.sh@10 -- # set +x
00:19:52.301 04:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:52.301 04:11:06 -- target/shutdown.sh@60 -- # read_io_count=131
00:19:52.301 04:11:06 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:19:52.301 04:11:06 -- target/shutdown.sh@64 -- # ret=0
00:19:52.301 04:11:06 -- target/shutdown.sh@65 -- # break
00:19:52.301 04:11:06 -- target/shutdown.sh@69 -- # return 0
00:19:52.301 04:11:06 -- target/shutdown.sh@135 -- # killprocess 3874388
00:19:52.301 04:11:06 -- common/autotest_common.sh@936 -- # '[' -z 3874388 ']'
00:19:52.301 04:11:06 -- common/autotest_common.sh@940 -- # kill -0 3874388
00:19:52.301 04:11:06 -- common/autotest_common.sh@941 -- # uname
00:19:52.301 04:11:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:52.301 04:11:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3874388
00:19:52.301 04:11:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:52.301 04:11:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:52.301 04:11:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3874388'
00:19:52.301 killing process with pid 3874388
00:19:52.301 04:11:06 -- common/autotest_common.sh@955 -- # kill 3874388
00:19:52.301 04:11:06 -- common/autotest_common.sh@960 -- # wait 3874388
00:19:52.301 [2024-04-19 04:11:06.630205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc350 is same with the state(5) to be set
00:19:52.303 [ last message repeated several dozen times during target teardown, first for tqpair=0x18fc350 and then for tqpair=0x1718cd0 ]
with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.632508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718cd0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639870] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639905] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639959] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.639997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the 
state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.640267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc7e0 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.303 [2024-04-19 04:11:06.641999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 
04:11:06.642076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same 
with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.642418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fcc70 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643760] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643769] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643778] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the 
state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.643999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.304 [2024-04-19 04:11:06.644062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644099] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.644285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717150 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645029] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645075] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 
04:11:06.645093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same 
with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645332] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645350] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.305 [2024-04-19 04:11:06.645362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.645409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17175e0 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the 
state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646300] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 
04:11:06.646626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646669] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646678] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646687] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.306 [2024-04-19 04:11:06.646695] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.646704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.646712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717a90 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.647614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103c210 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.647766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647779] 
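The run of identical *ERROR* lines above is the TCP qpair receive-state setter rejecting no-op transitions: each call asked the qpair to enter the state it already holds (state(5)), so the setter logged and returned without changing anything. Below is a minimal sketch of that guard pattern; the enum, struct, and function names are hypothetical stand-ins, and fprintf stands in for SPDK's logging macro (this is an illustration, not the actual lib/nvmf/tcp.c source).

    #include <stdio.h>

    /* Hypothetical stand-ins for the qpair receive states; the real enum
     * and its numeric values live in SPDK's TCP transport and may differ. */
    enum tcp_pdu_recv_state {
        TCP_RECV_STATE_AWAIT_PDU_READY = 0,
        TCP_RECV_STATE_AWAIT_PDU_CH,
        TCP_RECV_STATE_AWAIT_PDU_PSH,
        TCP_RECV_STATE_AWAIT_PDU_PAYLOAD,
        TCP_RECV_STATE_QUIESCING,
        TCP_RECV_STATE_ERROR,            /* "state(5)" under this numbering */
    };

    struct tcp_qpair {
        enum tcp_pdu_recv_state recv_state;
    };

    /* The guard that produces the log line: a transition to the state the
     * qpair already holds is logged as an error and skipped. */
    static void qpair_set_recv_state(struct tcp_qpair *tqpair,
                                     enum tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with "
                    "the state(%d) to be set\n", (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
        /* per-state bookkeeping would follow here */
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = TCP_RECV_STATE_AWAIT_PDU_READY };

        qpair_set_recv_state(&q, TCP_RECV_STATE_ERROR); /* real transition */
        qpair_set_recv_state(&q, TCP_RECV_STATE_ERROR); /* duplicate: logs */
        return 0;
    }

During an abrupt disconnect, error-handling paths can invoke such a setter repeatedly while the qpair is already in its terminal state, which is consistent with one teardown fanning out into hundreds of identical lines per tqpair as seen here.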
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103c650 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.647884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.647965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10613e0 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.647995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc500 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.648109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfea70 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.648224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648288] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014910 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.648337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.307 [2024-04-19 04:11:06.648416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.307 [2024-04-19 04:11:06.648428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da910 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1717f20 is same with the state(5) to be set 00:19:52.307 [2024-04-19 04:11:06.651957] 
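The floods of identical "The recv state of tqpair=... is same with the state(5) to be set" lines above come from nvmf_tcp_qpair_set_recv_state (tcp.c:1587) on the target side. Below is a minimal sketch of a guard that would produce exactly this message, assuming a hypothetical qpair struct and state enum; only the message text and file/function names come from this log, and the early-return when the state is unchanged is an inference, not SPDK's verified source.

    #include <stdio.h>

    /* Hypothetical stand-ins for the internal types; the value 5 matches
     * "state(5)" in the log, but the real enum name is not shown there. */
    enum tcp_pdu_recv_state { RECV_STATE_0 = 0, RECV_STATE_5 = 5 };

    struct tcp_qpair {
        enum tcp_pdu_recv_state recv_state;
    };

    /* Sketch: if the qpair is already in the requested state, log and
     * return; a caller that keeps requesting the same state in a tight
     * poll loop then floods the log with one line per attempt. */
    static void
    set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_5 };
        set_recv_state(&q, RECV_STATE_5); /* prints the message once */
        return 0;
    }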
00:19:52.308 [2024-04-19 04:11:06.653383 - 04:11:06.653759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17183b0 is same with the state(5) to be set (identical message repeated 39 times over this interval)
00:19:52.309 [2024-04-19 04:11:06.654523 - 04:11:06.654879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1718840 is same with the state(5) to be set (identical message repeated 62 times over this interval)
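Every aborted completion in this log is printed by spdk_nvme_print_completion in the form "(00/08) qid:... cid:... cdw0:... sqhd:... p:0 m:0 dnr:0". Per the NVMe base specification, the pair (00/08) is status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), while p, m and dnr are the phase, more and do-not-retry bits of the completion status halfword. A stand-alone decoder sketch using the spec's bit layout, not SPDK's headers:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the upper 16 bits of NVMe CQE dword 3: bit 0 = phase tag,
     * bits 1-8 = status code, bits 9-11 = status code type, bit 14 = more,
     * bit 15 = do-not-retry (per the NVMe base specification). */
    static void
    print_status(uint16_t status)
    {
        uint8_t p   = status & 0x1;
        uint8_t sc  = (uint8_t)((status >> 1) & 0xff);
        uint8_t sct = (uint8_t)((status >> 9) & 0x7);
        uint8_t m   = (uint8_t)((status >> 14) & 0x1);
        uint8_t dnr = (uint8_t)((status >> 15) & 0x1);

        printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        print_status((uint16_t)(0x08 << 1)); /* SCT 0x0, SC 0x08, flags clear */
        return 0;
    }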
00:19:52.309 [2024-04-19 04:11:06.655840 - 04:11:06.657301] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: all 64 outstanding WRITE commands sqid:1 cid:0-63 nsid:1 lba:24576-32640 (stepping by 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.311 [2024-04-19 04:11:06.657328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:19:52.311 [2024-04-19 04:11:06.657397] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100c9f0 was disconnected and freed. reset controller.
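The block above is the host-side disconnect sequence in full: once the connection drops, every command still outstanding on I/O qpair 1 completes with ABORTED - SQ DELETION, the next poll of spdk_nvme_qpair_process_completions returns the CQ transport error -6 (-ENXIO, "No such device or address"), and the bdev layer frees the qpair and resets the controller. A hedged sketch of that drain-then-fail shape, with illustrative types rather than the SPDK API:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative request/qpair types, not SPDK's. */
    struct request { uint16_t cid; struct request *next; };
    struct qpair   { uint16_t id; struct request *outstanding; int failed; };

    /* On SQ deletion, complete every outstanding request with the generic
     * status ABORTED - SQ DELETION (00/08), as in the log above. */
    static void
    abort_outstanding(struct qpair *qp)
    {
        for (struct request *r = qp->outstanding; r != NULL; r = r->next) {
            printf("ABORTED - SQ DELETION (00/08) qid:%u cid:%u\n", qp->id, r->cid);
        }
        qp->outstanding = NULL;
        qp->failed = 1;
    }

    /* Polling a dead qpair yields a negative errno, matching
     * "CQ transport error -6 (No such device or address) on qpair id 1". */
    static int
    process_completions(struct qpair *qp)
    {
        return qp->failed ? -ENXIO : 0; /* -ENXIO is -6 on Linux */
    }

    int main(void)
    {
        struct request r1 = { .cid = 1, .next = NULL };
        struct request r0 = { .cid = 0, .next = &r1 };
        struct qpair qp = { .id = 1, .outstanding = &r0, .failed = 0 };

        abort_outstanding(&qp);
        if (process_completions(&qp) < 0) {
            /* The disconnect callback would free the qpair and reset the
             * controller here, cf. bdev_nvme_disconnected_qpair_cb. */
            printf("qpair disconnected and freed. reset controller.\n");
        }
        return 0;
    }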
00:19:52.311 [2024-04-19 04:11:06.663224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.311 [2024-04-19 04:11:06.663256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.312 [... READ cid:1 through cid:63 (lba:24704 through lba:32640, len:128) on sqid:1 each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.313 [2024-04-19 04:11:06.664791] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x100dd20 was disconnected and freed. reset controller.
00:19:52.313 [2024-04-19 04:11:06.664872] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:52.313 [2024-04-19 04:11:06.664929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086d30 (9): Bad file descriptor
00:19:52.313 [2024-04-19 04:11:06.664960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:52.313 [... ASYNC EVENT REQUESTs cid:0 through cid:3 on qid:0 each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.313 [2024-04-19 04:11:06.665046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10205f0 is same with the state(5) to be set
00:19:52.313 [2024-04-19 04:11:06.665069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103c210 (9): Bad file descriptor
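The "(9): Bad file descriptor" flush failures are errno 9, EBADF: the host is still trying to flush TCP qpairs whose sockets were already closed by the reset. The ASYNC EVENT REQUEST (opcode 0c) entries on qid:0 are the admin queue's standing AERs being aborted with the same (00/08) status while the controller disconnects; the driver re-arms them once the controller is back. A short sketch of the AER callback shape on the public API; the handler is illustrative and deliberately minimal.

    /* aer_sketch.c: registering an async event callback (illustrative). */
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* During a disconnect the outstanding ASYNC EVENT REQUESTs complete as
     * ABORTED - SQ DELETION (00/08); genuine events arrive with a success
     * status and event details in cdw0. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* Aborted AER (e.g. by a reset): nothing to handle here;
             * the driver resubmits AERs after the controller reconnects. */
            return;
        }
        printf("async event: cdw0=0x%08x\n", cpl->cdw0);
    }

    /* Register once after attach. */
    static void
    arm_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }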
00:19:52.313 [2024-04-19 04:11:06.665093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103c650 (9): Bad file descriptor
00:19:52.313 [... the same flush failure ((9): Bad file descriptor) repeats for tqpair=0x10613e0, 0x11dc500, 0xbfea70, 0x1014910 and 0x11da910 ...]
00:19:52.313 [2024-04-19 04:11:06.665231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:52.313 [... ASYNC EVENT REQUESTs cid:0 through cid:3 on qid:0 each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.313 [2024-04-19 04:11:06.665317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1055020 is same with the state(5) to be set
00:19:52.313 [2024-04-19 04:11:06.665443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.314 [... READ cid:20 through cid:63 (lba:18944 through lba:24448, len:128) and WRITE cid:0 through cid:19 (lba:24576 through lba:27008, len:128) on sqid:1 each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.315 [2024-04-19 04:11:06.666996] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11ba190 was disconnected and freed. reset controller.
00:19:52.315 [2024-04-19 04:11:06.668710] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:52.315 [... "Unexpected PDU type 0x00" repeats four more times (04:11:06.668797 through 04:11:06.670370) ...]
00:19:52.315 [2024-04-19 04:11:06.670799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:52.315 [2024-04-19 04:11:06.670821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:52.315 [2024-04-19 04:11:06.670848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1055020 (9): Bad file descriptor
00:19:52.315 [2024-04-19 04:11:06.671008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.315 [2024-04-19 04:11:06.671206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.315 [2024-04-19 04:11:06.671222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086d30 with addr=10.0.0.2, port=4420
00:19:52.315 [2024-04-19 04:11:06.671233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086d30 is same with the state(5) to be set
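errno 111 is ECONNREFUSED: while the target side is being torn down and restarted by the test, the host's reconnect to 10.0.0.2:4420 is refused, so the reset is retried until the listener is back. bdev_nvme drives this asynchronously inside the driver; the sketch below shows the equivalent retry shape on the synchronous public API, with an illustrative attempt bound and backoff.

    /* reset_sketch.c: retrying a controller reset until reconnect (illustrative). */
    #include <unistd.h>
    #include "spdk/nvme.h"

    /* Each failed attempt here corresponds to a "connect() failed,
     * errno = 111" (ECONNREFUSED) line in the log. */
    static int
    reset_with_retry(struct spdk_nvme_ctrlr *ctrlr, int max_attempts)
    {
        for (int i = 0; i < max_attempts; i++) {
            if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                return 0; /* reconnected */
            }
            usleep(100 * 1000); /* illustrative 100 ms backoff */
        }
        return -1; /* target never came back within the attempt budget */
    }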
00:19:52.315 [2024-04-19 04:11:06.671269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.315 [2024-04-19 04:11:06.671287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.316 [... READ cid:6 through cid:48 (lba:17152 through lba:22528, len:128) on sqid:1 each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:19:52.316 [2024-04-19 04:11:06.672310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.316 [2024-04-19 04:11:06.672320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.316 [2024-04-19 04:11:06.672333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.316 [2024-04-19 04:11:06.672347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.316 [2024-04-19 04:11:06.672360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.316 [2024-04-19 04:11:06.672370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.316 [2024-04-19 04:11:06.672382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.316 [2024-04-19 04:11:06.672393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.316 [2024-04-19 04:11:06.672406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.316 [2024-04-19 04:11:06.672416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.316 [2024-04-19 04:11:06.672428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.316 [2024-04-19 04:11:06.672439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 
04:11:06.672562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.317 [2024-04-19 04:11:06.672769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.317 [2024-04-19 04:11:06.672780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b8f00 is same with the state(5) to be set 00:19:52.317 [2024-04-19 04:11:06.672860] 
00:19:52.317 [2024-04-19 04:11:06.672910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.672923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.672939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.672949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.672962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.672972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.672985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.672995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.317 [2024-04-19 04:11:06.673437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.317 [2024-04-19 04:11:06.673447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.673979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.673992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.318 [2024-04-19 04:11:06.674304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.318 [2024-04-19 04:11:06.674318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.674328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.674347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.674359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.674372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.674383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.674395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.674405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.674416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11509e0 is same with the state(5) to be set
00:19:52.319 [2024-04-19 04:11:06.674475] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11509e0 was disconnected and freed. reset controller.
00:19:52.319 [2024-04-19 04:11:06.674566] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:52.319 [2024-04-19 04:11:06.675127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.319 [2024-04-19 04:11:06.675373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.319 [2024-04-19 04:11:06.675390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc500 with addr=10.0.0.2, port=4420
00:19:52.319 [2024-04-19 04:11:06.675401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc500 is same with the state(5) to be set
00:19:52.319 [2024-04-19 04:11:06.675427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086d30 (9): Bad file descriptor
00:19:52.319 [2024-04-19 04:11:06.675453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10205f0 (9): Bad file descriptor
00:19:52.319 [2024-04-19 04:11:06.675498] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:52.319 [2024-04-19 04:11:06.675513] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:52.319 [2024-04-19 04:11:06.678662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:52.319 [2024-04-19 04:11:06.678690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:52.319 [2024-04-19 04:11:06.678904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.319 [2024-04-19 04:11:06.679155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:52.319 [2024-04-19 04:11:06.679171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1055020 with addr=10.0.0.2, port=4420
00:19:52.319 [2024-04-19 04:11:06.679181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1055020 is same with the state(5) to be set
00:19:52.319 [2024-04-19 04:11:06.679195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc500 (9): Bad file descriptor
00:19:52.319 [2024-04-19 04:11:06.679207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:52.319 [2024-04-19 04:11:06.679217] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:52.319 [2024-04-19 04:11:06.679227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:52.319 [2024-04-19 04:11:06.679313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.319 [2024-04-19 04:11:06.679900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.319 [2024-04-19 04:11:06.679914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.679924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.679938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.679949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.679962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.679972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.679985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.679995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.320 [2024-04-19 04:11:06.680683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.320 [2024-04-19 04:11:06.680695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.680705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.680717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.680727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.680739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.680750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.680763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.680774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.680786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.680800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.680811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151d50 is same with the state(5) to be set
00:19:52.321 [2024-04-19 04:11:06.682264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:52.321 [2024-04-19 04:11:06.682534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:52.321 [2024-04-19 04:11:06.682547] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.682981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.682991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.683004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.683015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.683038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.683051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.321 [2024-04-19 04:11:06.683074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.321 [2024-04-19 04:11:06.683084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:52.322 [2024-04-19 04:11:06.683494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 
04:11:06.683724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.683759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.683770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1153200 is same with the state(5) to be set 00:19:52.322 [2024-04-19 04:11:06.685238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.322 [2024-04-19 04:11:06.685414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.322 [2024-04-19 04:11:06.685427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.685983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.685996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.323 [2024-04-19 04:11:06.686205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.323 [2024-04-19 04:11:06.686215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.686743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.686755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a090 is same with the state(5) to be set 00:19:52.324 [2024-04-19 04:11:06.688244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.324 [2024-04-19 04:11:06.688458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.324 [2024-04-19 04:11:06.688468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.325 [2024-04-19 04:11:06.688808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.325 [2024-04-19 04:11:06.688819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive log condensed: the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:24 through cid:62 on sqid:1, lba:19456 through lba:24320, len:128]
00:19:52.326 [2024-04-19 04:11:06.689743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.326 [2024-04-19 04:11:06.689753]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.326 [2024-04-19 04:11:06.689764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b18f0 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.692231] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.326 [2024-04-19 04:11:06.692259] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:52.326 [2024-04-19 04:11:06.692275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:52.326 [2024-04-19 04:11:06.692291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:52.326 [2024-04-19 04:11:06.692527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.692744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.692761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfea70 with addr=10.0.0.2, port=4420 00:19:52.326 [2024-04-19 04:11:06.692772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfea70 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.693036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.693252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.693267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1014910 with addr=10.0.0.2, port=4420 00:19:52.326 [2024-04-19 04:11:06.693279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014910 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.693293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1055020 (9): Bad file descriptor 00:19:52.326 [2024-04-19 04:11:06.693305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.326 [2024-04-19 04:11:06.693315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:52.326 [2024-04-19 04:11:06.693330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:52.326 [2024-04-19 04:11:06.693380] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.326 [2024-04-19 04:11:06.693406] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.326 [2024-04-19 04:11:06.693428] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
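The "(00/08)" printed with each aborted command above is the SCT/SC pair from the NVMe completion status: Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion, matching the "ABORTED - SQ DELETION" text. A minimal, self-contained C sketch of that decoding (bit layout only; illustrative, not SPDK's spdk_nvme_print_completion):

```c
#include <stdio.h>
#include <stdint.h>

/* Decode the "(00/08)" pair printed with each aborted command above.
 * In an NVMe completion, the 16-bit status word carries the phase bit
 * (bit 0), the Status Code in bits 8:1, and the Status Code Type in
 * bits 11:9. SCT 0x0 is Generic Command Status, and its SC 0x08 is
 * "Command Aborted due to SQ Deletion" in the NVMe base spec. */
int main(void)
{
    uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1)); /* SCT=0x0, SC=0x08 */

    uint8_t sct = (status >> 9) & 0x7;  /* Status Code Type */
    uint8_t sc  = (status >> 1) & 0xff; /* Status Code */

    printf("(%02x/%02x) -> %s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other status");
    return 0;
}
```

Read that way, nothing failed at the media level here: the queued READs were dropped because submission queue 1 went away while the controller was being reset.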
00:19:52.326 [2024-04-19 04:11:06.693444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1014910 (9): Bad file descriptor 00:19:52.326 [2024-04-19 04:11:06.693460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfea70 (9): Bad file descriptor 00:19:52.326 [2024-04-19 04:11:06.694181] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:52.326 [2024-04-19 04:11:06.694201] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.326 [2024-04-19 04:11:06.694505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.694771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.694787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103c650 with addr=10.0.0.2, port=4420 00:19:52.326 [2024-04-19 04:11:06.694798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103c650 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.694960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.695130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.695145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11da910 with addr=10.0.0.2, port=4420 00:19:52.326 [2024-04-19 04:11:06.695155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da910 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.695390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.695582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.326 [2024-04-19 04:11:06.695598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10613e0 with addr=10.0.0.2, port=4420 00:19:52.326 [2024-04-19 04:11:06.695609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10613e0 is same with the state(5) to be set 00:19:52.326 [2024-04-19 04:11:06.695623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:52.327 [2024-04-19 04:11:06.695632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:52.327 [2024-04-19 04:11:06.695643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
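The posix.c "connect() failed, errno = 111" entries are plain ECONNREFUSED results on Linux: the initiator keeps dialing 10.0.0.2:4420 while nothing is listening there anymore. A standalone C sketch reproducing that failure mode (address and port taken from the log; illustrative only, not SPDK's posix_sock_create):

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* One TCP connect attempt to the NVMe-oF target address seen in the
 * log. With no listener on 10.0.0.2:4420, connect() fails and errno is
 * ECONNREFUSED, which Linux defines as 111: the same "connect() failed,
 * errno = 111" that posix_sock_create keeps printing above. */
int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420); /* NVMe-oF TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Each refused connect shows up as a failed reconnect attempt, which lines up with the resets above ending in "Resetting controller failed."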
00:19:52.327 [2024-04-19 04:11:06.696680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:52.327 [2024-04-19 04:11:06.696699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive log condensed: the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1 through cid:63 on sqid:1, lba:16512 through lba:24448, len:128]
00:19:52.328 [2024-04-19 04:11:06.698163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100b540 is same with the state(5) to be set 00:19:52.328 [2024-04-19 04:11:06.700769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:52.328 [2024-04-19 04:11:06.700794] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
task offset: 24576 on job bdev=Nvme8n1 fails

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.08 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme1n1 : 1.08 122.84 7.68 59.11 0.00 347417.01 19660.80 306946.79
Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.08 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme2n1 : 1.08 137.55 8.60 59.48 0.00 313626.65 7536.64 314572.80
Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.08 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme3n1 : 1.08 177.06 11.07 59.02 0.00 255918.55 25976.09 314572.80
Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.09 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme4n1 : 1.09 121.33 7.58 58.83 0.00 327844.14 20018.27 305040.29
Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.09 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme5n1 : 1.09 117.33 7.33 58.67 0.00 327797.92 16681.89 308853.29
Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.09 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme6n1 : 1.09 117.01 7.31 58.51 0.00 320956.66 24069.59 329824.81
Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.11 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme7n1 : 1.11 115.81 7.24 57.90 0.00 316885.80 44087.85 285975.27
Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.07 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme8n1 : 1.07 179.60 11.23 59.87 0.00 222663.56 15013.70 306946.79
Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.07 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme9n1 : 1.07 178.69 11.17 59.56 0.00 218136.09 10187.87 297414.28
Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.10 seconds with error, Verification LBA range: start 0x0 length 0x400
Nvme10n1 : 1.10 116.69 7.29 58.35 0.00 290613.06 34078.72 341263.83
===================================================================================================================
Total : 1383.92 86.50 589.29 0.00 288999.52 7536.64 341263.83
[2024-04-19 04:11:06.729490] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-04-19 04:11:06.729531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting
controller 00:19:52.329 [2024-04-19 04:11:06.729872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.730149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.730166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x103c210 with addr=10.0.0.2, port=4420 00:19:52.329 [2024-04-19 04:11:06.730179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103c210 is same with the state(5) to be set 00:19:52.329 [2024-04-19 04:11:06.730198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103c650 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.730212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11da910 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.730232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10613e0 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.730244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.730254] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.730264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.329 [2024-04-19 04:11:06.730282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.730291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.730301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:52.329 [2024-04-19 04:11:06.730458] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.329 [2024-04-19 04:11:06.730473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
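As a sanity check on the Latency(us) table above: every job header reports IO size: 65536, so the MiB/s column should equal IOPS * 65536 / 2^20, i.e. IOPS / 16, and it does (122.84 IOPS gives 7.68 MiB/s for Nvme1n1). A small C check of that arithmetic, using the IOPS values from the table:

```c
#include <stdio.h>

/* Cross-check the table's MiB/s column against its IOPS column. Every
 * job header reports IO size: 65536, so throughput in MiB/s should be
 * IOPS * 65536 / (1024 * 1024), i.e. IOPS / 16. */
int main(void)
{
    const double io_size = 65536.0; /* bytes per I/O, from the job headers */
    const double iops[] = { 122.84, 137.55, 177.06, 121.33, 117.33,
                            117.01, 115.81, 179.60, 178.69, 116.69 };

    for (int i = 0; i < 10; i++) {
        double mibps = iops[i] * io_size / (1024.0 * 1024.0);
        printf("Nvme%dn1: %7.2f IOPS -> %5.2f MiB/s\n", i + 1, iops[i], mibps);
    }
    return 0;
}
```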
00:19:52.329 [2024-04-19 04:11:06.730705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.730972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.730987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1086d30 with addr=10.0.0.2, port=4420 00:19:52.329 [2024-04-19 04:11:06.730998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1086d30 is same with the state(5) to be set 00:19:52.329 [2024-04-19 04:11:06.731263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.731548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.329 [2024-04-19 04:11:06.731564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10205f0 with addr=10.0.0.2, port=4420 00:19:52.329 [2024-04-19 04:11:06.731575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10205f0 is same with the state(5) to be set 00:19:52.329 [2024-04-19 04:11:06.731590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103c210 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.731602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.731611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.731620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:52.329 [2024-04-19 04:11:06.731635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.731645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.731654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:52.329 [2024-04-19 04:11:06.731668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.731677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.731687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:52.329 [2024-04-19 04:11:06.731736] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.329 [2024-04-19 04:11:06.731752] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.329 [2024-04-19 04:11:06.731765] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.329 [2024-04-19 04:11:06.731784] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:52.329 [2024-04-19 04:11:06.732185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.329 [2024-04-19 04:11:06.732199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.329 [2024-04-19 04:11:06.732207] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.329 [2024-04-19 04:11:06.732234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1086d30 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.732248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10205f0 (9): Bad file descriptor 00:19:52.329 [2024-04-19 04:11:06.732260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.732268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.732278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:52.329 [2024-04-19 04:11:06.732328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:52.329 [2024-04-19 04:11:06.732350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:52.329 [2024-04-19 04:11:06.732363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:52.329 [2024-04-19 04:11:06.732374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.329 [2024-04-19 04:11:06.732386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.329 [2024-04-19 04:11:06.732422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.732432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.732442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:52.329 [2024-04-19 04:11:06.732455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:52.329 [2024-04-19 04:11:06.732464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:52.329 [2024-04-19 04:11:06.732473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:52.329 [2024-04-19 04:11:06.732512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.330 [2024-04-19 04:11:06.732523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:52.330 [2024-04-19 04:11:06.732800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc500 with addr=10.0.0.2, port=4420 00:19:52.330 [2024-04-19 04:11:06.733085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc500 is same with the state(5) to be set 00:19:52.330 [2024-04-19 04:11:06.733338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1055020 with addr=10.0.0.2, port=4420 00:19:52.330 [2024-04-19 04:11:06.733610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1055020 is same with the state(5) to be set 00:19:52.330 [2024-04-19 04:11:06.733777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.733977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1014910 with addr=10.0.0.2, port=4420 00:19:52.330 [2024-04-19 04:11:06.733987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1014910 is same with the state(5) to be set 00:19:52.330 [2024-04-19 04:11:06.734221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.734409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:52.330 [2024-04-19 04:11:06.734426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfea70 with addr=10.0.0.2, port=4420 00:19:52.330 [2024-04-19 04:11:06.734437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfea70 is same with the state(5) to be set 00:19:52.330 [2024-04-19 04:11:06.734477] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc500 (9): Bad file descriptor 00:19:52.330 [2024-04-19 04:11:06.734491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1055020 (9): Bad file descriptor 00:19:52.330 [2024-04-19 04:11:06.734505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1014910 (9): Bad file descriptor 00:19:52.330 [2024-04-19 04:11:06.734517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfea70 (9): Bad file descriptor 00:19:52.330 [2024-04-19 04:11:06.734560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.330 [2024-04-19 04:11:06.734573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:52.330 [2024-04-19 04:11:06.734583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:52.330 [2024-04-19 04:11:06.734595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:52.330 [2024-04-19 04:11:06.734606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:52.330 [2024-04-19 04:11:06.734615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:52.330 [2024-04-19 04:11:06.734627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:52.330 [2024-04-19 04:11:06.734636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:52.330 [2024-04-19 04:11:06.734645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:52.330 [2024-04-19 04:11:06.734658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.330 [2024-04-19 04:11:06.734666] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.330 [2024-04-19 04:11:06.734675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.330 [2024-04-19 04:11:06.734710] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.330 [2024-04-19 04:11:06.734720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.330 [2024-04-19 04:11:06.734729] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:52.330 [2024-04-19 04:11:06.734737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:53.003 04:11:07 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:53.003 04:11:07 -- target/shutdown.sh@139 -- # sleep 1 00:19:53.941 04:11:08 -- target/shutdown.sh@142 -- # kill -9 3874698 00:19:53.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3874698) - No such process 00:19:53.941 04:11:08 -- target/shutdown.sh@142 -- # true 00:19:53.941 04:11:08 -- target/shutdown.sh@144 -- # stoptarget 00:19:53.941 04:11:08 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:53.941 04:11:08 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:53.941 04:11:08 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.941 04:11:08 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:53.941 04:11:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:53.941 04:11:08 -- nvmf/common.sh@117 -- # sync 00:19:53.941 04:11:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.941 04:11:08 -- nvmf/common.sh@120 -- # set +e 00:19:53.941 04:11:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.941 04:11:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.941 rmmod nvme_tcp 00:19:53.941 rmmod nvme_fabrics 00:19:53.941 rmmod nvme_keyring 00:19:53.941 04:11:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.941 04:11:08 -- nvmf/common.sh@124 -- # set -e 00:19:53.941 04:11:08 -- nvmf/common.sh@125 -- # return 0 00:19:53.941 04:11:08 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:53.941 04:11:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:53.941 04:11:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:53.941 04:11:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:53.941 04:11:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.941 04:11:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.941 04:11:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.941 04:11:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.941 04:11:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.845 04:11:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.845 00:19:55.845 real 0m8.235s 00:19:55.845 user 0m20.977s 00:19:55.845 sys 0m1.381s 00:19:55.845 04:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:55.845 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 ************************************ 00:19:55.845 END TEST nvmf_shutdown_tc3 00:19:55.845 ************************************ 00:19:55.845 04:11:10 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:55.845 00:19:55.845 real 0m31.828s 00:19:55.845 user 1m18.897s 00:19:55.845 sys 0m8.815s 00:19:55.845 04:11:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:55.845 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 ************************************ 00:19:55.845 END TEST nvmf_shutdown 00:19:55.845 ************************************ 00:19:55.845 04:11:10 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:19:55.845 04:11:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.845 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.103 04:11:10 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:19:56.103 04:11:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:56.103 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.103 
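The shutdown.sh `kill -9 3874698` above reports "No such process" because the bdevperf PID had already exited before cleanup ran, and the script shrugs that off with the `true` that follows. At the syscall level this is kill(2) failing with ESRCH; a small C sketch of the same probe (the PID is the stale value from the log, used purely for illustration, and signal 0 only checks for existence rather than killing anything):

```c
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <signal.h>
#include <sys/types.h>

/* Probing a PID that no longer exists with kill(pid, 0) fails with
 * ESRCH, the same error behind the shell's "kill: (3874698) - No such
 * process" above. The test script follows its kill with "true" so a
 * long-gone target still counts as a clean shutdown. */
int main(void)
{
    pid_t stale_pid = 3874698; /* stale bdevperf PID taken from the log */

    if (kill(stale_pid, 0) < 0 && errno == ESRCH) {
        fprintf(stderr, "kill: (%d) - No such process\n", (int)stale_pid);
    }
    return 0;
}
```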
04:11:10 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:19:56.103 04:11:10 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.103 04:11:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.104 04:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.104 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 ************************************ 00:19:56.104 START TEST nvmf_multicontroller 00:19:56.104 ************************************ 00:19:56.104 04:11:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:56.104 * Looking for test storage... 00:19:56.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.362 04:11:10 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.362 04:11:10 -- nvmf/common.sh@7 -- # uname -s 00:19:56.362 04:11:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.362 04:11:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.362 04:11:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.362 04:11:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.362 04:11:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.362 04:11:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.362 04:11:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.362 04:11:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.362 04:11:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.362 04:11:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.362 04:11:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:56.362 04:11:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:19:56.362 04:11:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.362 04:11:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.362 04:11:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.362 04:11:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.362 04:11:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.362 04:11:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.362 04:11:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.362 04:11:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.362 04:11:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.362 04:11:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.362 04:11:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.362 04:11:10 -- paths/export.sh@5 -- # export PATH 00:19:56.362 04:11:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.362 04:11:10 -- nvmf/common.sh@47 -- # : 0 00:19:56.362 04:11:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.362 04:11:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.362 04:11:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.362 04:11:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.362 04:11:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.362 04:11:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.362 04:11:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.363 04:11:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.363 04:11:10 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.363 04:11:10 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.363 04:11:10 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:56.363 04:11:10 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:56.363 04:11:10 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.363 04:11:10 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:56.363 04:11:10 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:56.363 04:11:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:56.363 04:11:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.363 04:11:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:56.363 04:11:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:56.363 04:11:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:56.363 04:11:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.363 04:11:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.363 04:11:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
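
Note on the NIC discovery that follows: nvmftestinit -> prepare_net_devs walks the PCI bus and classifies ports by vendor:device ID (0x8086:0x159b is the Intel E810 part bound to the "ice" driver matched below). A minimal standalone equivalent of that classification, assuming lspci and sysfs are available -- an illustrative sketch, not the harness's own gather_supported_nvmf_pci_devs:

    # Enumerate Intel E810 functions (vendor 0x8086, device 0x159b) and the
    # kernel net devices bound to each, mirroring the "Found net devices
    # under 0000:af:00.x" lines in the trace below.
    for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
      done
    done
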
00:19:56.363 04:11:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:56.363 04:11:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:56.363 04:11:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.363 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:20:01.631 04:11:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:01.631 04:11:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.631 04:11:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.631 04:11:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.631 04:11:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.631 04:11:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.631 04:11:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.631 04:11:16 -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.631 04:11:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.631 04:11:16 -- nvmf/common.sh@296 -- # e810=() 00:20:01.631 04:11:16 -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.631 04:11:16 -- nvmf/common.sh@297 -- # x722=() 00:20:01.631 04:11:16 -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.631 04:11:16 -- nvmf/common.sh@298 -- # mlx=() 00:20:01.631 04:11:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.631 04:11:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.631 04:11:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.631 04:11:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.631 04:11:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.631 04:11:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:01.631 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:01.631 04:11:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.631 04:11:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:01.631 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:01.631 04:11:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:01.631 04:11:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.631 04:11:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.631 04:11:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.631 04:11:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:01.631 Found net devices under 0000:af:00.0: cvl_0_0 00:20:01.631 04:11:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.631 04:11:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.631 04:11:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.631 04:11:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.631 04:11:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:01.631 Found net devices under 0000:af:00.1: cvl_0_1 00:20:01.631 04:11:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.631 04:11:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:01.631 04:11:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:01.631 04:11:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:01.631 04:11:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.631 04:11:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.631 04:11:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.631 04:11:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.631 04:11:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.631 04:11:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.631 04:11:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.631 04:11:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.631 04:11:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.631 04:11:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.631 04:11:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.631 04:11:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.631 04:11:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.889 04:11:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.889 04:11:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.889 04:11:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.889 04:11:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.889 04:11:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.889 04:11:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:20:01.889 04:11:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:20:01.889 00:20:01.889 --- 10.0.0.2 ping statistics --- 00:20:01.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.889 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:20:01.889 04:11:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:20:01.889 00:20:01.889 --- 10.0.0.1 ping statistics --- 00:20:01.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.889 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:01.889 04:11:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.889 04:11:16 -- nvmf/common.sh@411 -- # return 0 00:20:01.889 04:11:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:01.890 04:11:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.890 04:11:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:01.890 04:11:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:01.890 04:11:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.890 04:11:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:01.890 04:11:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:01.890 04:11:16 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:01.890 04:11:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:01.890 04:11:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:01.890 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:01.890 04:11:16 -- nvmf/common.sh@470 -- # nvmfpid=3879094 00:20:01.890 04:11:16 -- nvmf/common.sh@471 -- # waitforlisten 3879094 00:20:01.890 04:11:16 -- common/autotest_common.sh@817 -- # '[' -z 3879094 ']' 00:20:01.890 04:11:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:01.890 04:11:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.890 04:11:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.890 04:11:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.890 04:11:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.890 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.148 [2024-04-19 04:11:16.457951] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:02.148 [2024-04-19 04:11:16.458004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.148 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.148 [2024-04-19 04:11:16.537703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:02.148 [2024-04-19 04:11:16.625474] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
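
Topology recap for the stretch above: the harness splits the two E810 ports across network namespaces so one machine can act as both initiator and target. The commands, as captured in the xtrace (cvl_0_0 and cvl_0_1 are the two ports found earlier):

    # target port cvl_0_0 moves into its own namespace; initiator port
    # cvl_0_1 stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
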
00:20:02.148 [2024-04-19 04:11:16.625520] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.148 [2024-04-19 04:11:16.625531] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.148 [2024-04-19 04:11:16.625541] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.148 [2024-04-19 04:11:16.625549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.148 [2024-04-19 04:11:16.625657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.148 [2024-04-19 04:11:16.625768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.148 [2024-04-19 04:11:16.625768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.406 04:11:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:02.406 04:11:16 -- common/autotest_common.sh@850 -- # return 0 00:20:02.406 04:11:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:02.406 04:11:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.406 04:11:16 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 [2024-04-19 04:11:16.770251] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 Malloc0 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 [2024-04-19 04:11:16.841741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 
-- common/autotest_common.sh@10 -- # set +x 00:20:02.406 [2024-04-19 04:11:16.849694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 Malloc1 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:02.406 04:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.406 04:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.406 04:11:16 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:02.406 04:11:16 -- host/multicontroller.sh@44 -- # bdevperf_pid=3879296 00:20:02.406 04:11:16 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:02.406 04:11:16 -- host/multicontroller.sh@47 -- # waitforlisten 3879296 /var/tmp/bdevperf.sock 00:20:02.406 04:11:16 -- common/autotest_common.sh@817 -- # '[' -z 3879296 ']' 00:20:02.406 04:11:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.406 04:11:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:02.406 04:11:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
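
At this point the target side is fully provisioned: two subsystems (cnode1 and cnode2), each backed by a 64 MiB malloc bdev with 512-byte blocks and listening on 10.0.0.2 ports 4420 and 4421, and bdevperf has been launched with its own RPC socket. Condensed, the same setup issued directly with SPDK's scripts/rpc.py (the log's rpc_cmd is a wrapper around it; all arguments verbatim from the trace, assuming rpc.py on PATH and the default socket /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is created the same way with Malloc1 / SPDK00000000000002
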
00:20:02.406 04:11:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:02.406 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 04:11:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:02.972 04:11:17 -- common/autotest_common.sh@850 -- # return 0 00:20:02.972 04:11:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:02.972 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.972 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 NVMe0n1 00:20:02.972 04:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.972 04:11:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.972 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.972 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 04:11:17 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:02.972 04:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.972 1 00:20:02.972 04:11:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:02.972 04:11:17 -- common/autotest_common.sh@638 -- # local es=0 00:20:02.972 04:11:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:02.972 04:11:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.972 04:11:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:02.972 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.972 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:02.972 request: 00:20:02.972 { 00:20:02.972 "name": "NVMe0", 00:20:02.972 "trtype": "tcp", 00:20:02.972 "traddr": "10.0.0.2", 00:20:02.972 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:02.972 "hostaddr": "10.0.0.2", 00:20:02.972 "hostsvcid": "60000", 00:20:02.972 "adrfam": "ipv4", 00:20:02.972 "trsvcid": "4420", 00:20:02.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.972 "method": "bdev_nvme_attach_controller", 00:20:02.972 "req_id": 1 00:20:02.972 } 00:20:02.972 Got JSON-RPC error response 00:20:02.972 response: 00:20:02.972 { 00:20:02.972 "code": -114, 00:20:02.972 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:02.972 } 00:20:02.972 04:11:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:02.972 04:11:17 -- common/autotest_common.sh@641 -- # es=1 00:20:02.972 04:11:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:02.972 04:11:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:02.972 04:11:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:02.972 04:11:17 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:02.972 04:11:17 -- common/autotest_common.sh@638 -- # local es=0 00:20:02.972 04:11:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:02.972 04:11:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:02.972 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:02.972 04:11:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:02.972 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.972 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.230 request: 00:20:03.230 { 00:20:03.230 "name": "NVMe0", 00:20:03.230 "trtype": "tcp", 00:20:03.230 "traddr": "10.0.0.2", 00:20:03.230 "hostaddr": "10.0.0.2", 00:20:03.230 "hostsvcid": "60000", 00:20:03.230 "adrfam": "ipv4", 00:20:03.230 "trsvcid": "4420", 00:20:03.230 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.230 "method": "bdev_nvme_attach_controller", 00:20:03.230 "req_id": 1 00:20:03.230 } 00:20:03.230 Got JSON-RPC error response 00:20:03.230 response: 00:20:03.230 { 00:20:03.230 "code": -114, 00:20:03.230 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:03.230 } 00:20:03.230 04:11:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:03.230 04:11:17 -- common/autotest_common.sh@641 -- # es=1 00:20:03.230 04:11:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.230 04:11:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.230 04:11:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.230 04:11:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:03.230 04:11:17 -- common/autotest_common.sh@638 -- # local es=0 00:20:03.230 04:11:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:03.230 04:11:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:03.230 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.230 04:11:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:03.230 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.230 04:11:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:03.230 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.230 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.230 request: 00:20:03.230 { 00:20:03.230 "name": "NVMe0", 00:20:03.230 "trtype": "tcp", 00:20:03.230 "traddr": "10.0.0.2", 00:20:03.230 "hostaddr": 
"10.0.0.2", 00:20:03.230 "hostsvcid": "60000", 00:20:03.230 "adrfam": "ipv4", 00:20:03.230 "trsvcid": "4420", 00:20:03.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.230 "multipath": "disable", 00:20:03.230 "method": "bdev_nvme_attach_controller", 00:20:03.230 "req_id": 1 00:20:03.230 } 00:20:03.230 Got JSON-RPC error response 00:20:03.230 response: 00:20:03.231 { 00:20:03.231 "code": -114, 00:20:03.231 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:03.231 } 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:03.231 04:11:17 -- common/autotest_common.sh@641 -- # es=1 00:20:03.231 04:11:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.231 04:11:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.231 04:11:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.231 04:11:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:03.231 04:11:17 -- common/autotest_common.sh@638 -- # local es=0 00:20:03.231 04:11:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:03.231 04:11:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:03.231 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.231 04:11:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:03.231 04:11:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:03.231 04:11:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:03.231 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.231 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 request: 00:20:03.231 { 00:20:03.231 "name": "NVMe0", 00:20:03.231 "trtype": "tcp", 00:20:03.231 "traddr": "10.0.0.2", 00:20:03.231 "hostaddr": "10.0.0.2", 00:20:03.231 "hostsvcid": "60000", 00:20:03.231 "adrfam": "ipv4", 00:20:03.231 "trsvcid": "4420", 00:20:03.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.231 "multipath": "failover", 00:20:03.231 "method": "bdev_nvme_attach_controller", 00:20:03.231 "req_id": 1 00:20:03.231 } 00:20:03.231 Got JSON-RPC error response 00:20:03.231 response: 00:20:03.231 { 00:20:03.231 "code": -114, 00:20:03.231 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:03.231 } 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:03.231 04:11:17 -- common/autotest_common.sh@641 -- # es=1 00:20:03.231 04:11:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:03.231 04:11:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:03.231 04:11:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:03.231 04:11:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:03.231 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.231 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:03.231 04:11:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:03.231 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.231 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.231 04:11:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:03.231 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.231 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.231 04:11:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:03.231 04:11:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:03.231 04:11:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.231 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:20:03.231 04:11:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.231 04:11:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:03.231 04:11:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.604 0 00:20:04.604 04:11:18 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:04.604 04:11:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.604 04:11:18 -- common/autotest_common.sh@10 -- # set +x 00:20:04.604 04:11:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.604 04:11:18 -- host/multicontroller.sh@100 -- # killprocess 3879296 00:20:04.604 04:11:18 -- common/autotest_common.sh@936 -- # '[' -z 3879296 ']' 00:20:04.604 04:11:18 -- common/autotest_common.sh@940 -- # kill -0 3879296 00:20:04.604 04:11:18 -- common/autotest_common.sh@941 -- # uname 00:20:04.604 04:11:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.604 04:11:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3879296 00:20:04.604 04:11:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:04.604 04:11:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:04.604 04:11:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3879296' 00:20:04.604 killing process with pid 3879296 00:20:04.605 04:11:18 -- common/autotest_common.sh@955 -- # kill 3879296 00:20:04.605 04:11:18 -- common/autotest_common.sh@960 -- # wait 3879296 00:20:04.863 04:11:19 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.863 04:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.863 04:11:19 -- common/autotest_common.sh@10 -- # set +x 00:20:04.863 04:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.863 04:11:19 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:04.863 04:11:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.863 04:11:19 -- common/autotest_common.sh@10 -- # set +x 00:20:04.863 04:11:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.863 04:11:19 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
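
Summary of the multipath sequence just executed: the four NOT-wrapped attach attempts are the negative half of the test (re-registering NVMe0 with a different hostnqn, against cnode2, with -x disable, and with -x failover must all be rejected with -114), after which a second path and a second controller are attached successfully. The positive half against bdevperf's RPC socket, commands verbatim from the trace ($RPC is shorthand introduced here for readability):

    RPC='rpc.py -s /var/tmp/bdevperf.sock'
    # add a second path to cnode1 on port 4421, then swap in a second controller
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers | grep -c NVMe   # expect 2 (NVMe0 + NVMe1)

The grep -c result of 2 is why the '[' 2 '!=' 2 ']' guard above passes and the bdevperf I/O run proceeds.
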
00:20:04.863 04:11:19 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:04.863 04:11:19 -- common/autotest_common.sh@1598 -- # read -r file 00:20:04.863 04:11:19 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:04.863 04:11:19 -- common/autotest_common.sh@1597 -- # sort -u 00:20:04.863 04:11:19 -- common/autotest_common.sh@1599 -- # cat 00:20:04.863 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:04.863 [2024-04-19 04:11:16.946620] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:04.863 [2024-04-19 04:11:16.946681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879296 ] 00:20:04.863 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.863 [2024-04-19 04:11:17.028762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.863 [2024-04-19 04:11:17.114469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.863 [2024-04-19 04:11:17.727098] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 2b363395-1b72-41df-9eac-031a1c9cd442 already exists 00:20:04.863 [2024-04-19 04:11:17.727136] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:2b363395-1b72-41df-9eac-031a1c9cd442 alias for bdev NVMe1n1 00:20:04.863 [2024-04-19 04:11:17.727149] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:04.863 Running I/O for 1 seconds... 00:20:04.863 00:20:04.863 Latency(us) 00:20:04.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.863 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:04.863 NVMe0n1 : 1.01 16252.48 63.49 0.00 0.00 7863.85 6672.76 14656.23 00:20:04.863 =================================================================================================================== 00:20:04.863 Total : 16252.48 63.49 0.00 0.00 7863.85 6672.76 14656.23 00:20:04.863 Received shutdown signal, test time was about 1.000000 seconds 00:20:04.863 00:20:04.863 Latency(us) 00:20:04.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.863 =================================================================================================================== 00:20:04.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.863 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:04.863 04:11:19 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:04.863 04:11:19 -- common/autotest_common.sh@1598 -- # read -r file 00:20:04.863 04:11:19 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:04.863 04:11:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:04.863 04:11:19 -- nvmf/common.sh@117 -- # sync 00:20:04.863 04:11:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.863 04:11:19 -- nvmf/common.sh@120 -- # set +e 00:20:04.863 04:11:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.863 04:11:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.863 rmmod nvme_tcp 00:20:04.863 rmmod nvme_fabrics 00:20:04.863 rmmod nvme_keyring 00:20:04.863 04:11:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.863 04:11:19 -- nvmf/common.sh@124 -- # set -e 
00:20:04.863 04:11:19 -- nvmf/common.sh@125 -- # return 0 00:20:04.863 04:11:19 -- nvmf/common.sh@478 -- # '[' -n 3879094 ']' 00:20:04.863 04:11:19 -- nvmf/common.sh@479 -- # killprocess 3879094 00:20:04.863 04:11:19 -- common/autotest_common.sh@936 -- # '[' -z 3879094 ']' 00:20:04.863 04:11:19 -- common/autotest_common.sh@940 -- # kill -0 3879094 00:20:04.863 04:11:19 -- common/autotest_common.sh@941 -- # uname 00:20:04.863 04:11:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:04.863 04:11:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3879094 00:20:04.863 04:11:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:04.863 04:11:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:04.863 04:11:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3879094' 00:20:04.863 killing process with pid 3879094 00:20:04.863 04:11:19 -- common/autotest_common.sh@955 -- # kill 3879094 00:20:04.863 04:11:19 -- common/autotest_common.sh@960 -- # wait 3879094 00:20:05.123 04:11:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:05.123 04:11:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:05.123 04:11:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:05.123 04:11:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.123 04:11:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.123 04:11:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.123 04:11:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.123 04:11:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.657 04:11:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.657 00:20:07.657 real 0m11.130s 00:20:07.657 user 0m12.848s 00:20:07.657 sys 0m5.043s 00:20:07.657 04:11:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:07.657 04:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.657 ************************************ 00:20:07.657 END TEST nvmf_multicontroller 00:20:07.657 ************************************ 00:20:07.657 04:11:21 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:07.657 04:11:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:07.657 04:11:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.657 04:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.657 ************************************ 00:20:07.657 START TEST nvmf_aer 00:20:07.657 ************************************ 00:20:07.657 04:11:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:07.657 * Looking for test storage... 
00:20:07.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:07.657 04:11:21 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.657 04:11:21 -- nvmf/common.sh@7 -- # uname -s 00:20:07.657 04:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.657 04:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.657 04:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.657 04:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.657 04:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.657 04:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.657 04:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.657 04:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.657 04:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.657 04:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.657 04:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:07.657 04:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:07.657 04:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.657 04:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.657 04:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.657 04:11:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.657 04:11:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.657 04:11:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.657 04:11:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.657 04:11:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.657 04:11:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.657 04:11:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.657 04:11:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.657 04:11:21 -- paths/export.sh@5 -- # export PATH 00:20:07.658 04:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.658 04:11:21 -- nvmf/common.sh@47 -- # : 0 00:20:07.658 04:11:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.658 04:11:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.658 04:11:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.658 04:11:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.658 04:11:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.658 04:11:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.658 04:11:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.658 04:11:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.658 04:11:21 -- host/aer.sh@11 -- # nvmftestinit 00:20:07.658 04:11:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:07.658 04:11:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.658 04:11:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:07.658 04:11:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:07.658 04:11:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:07.658 04:11:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.658 04:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.658 04:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.658 04:11:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:07.658 04:11:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:07.658 04:11:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.658 04:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.927 04:11:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:12.927 04:11:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.927 04:11:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.927 04:11:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.927 04:11:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.927 04:11:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.927 04:11:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.927 04:11:27 -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.927 04:11:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.927 04:11:27 -- nvmf/common.sh@296 -- # e810=() 00:20:12.927 04:11:27 -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.927 04:11:27 -- nvmf/common.sh@297 -- # x722=() 00:20:12.927 
04:11:27 -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.927 04:11:27 -- nvmf/common.sh@298 -- # mlx=() 00:20:12.927 04:11:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.927 04:11:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.927 04:11:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.927 04:11:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.927 04:11:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.927 04:11:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.927 04:11:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.927 04:11:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.927 04:11:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.927 04:11:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:12.927 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:12.927 04:11:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.927 04:11:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.927 04:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.928 04:11:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:12.928 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:12.928 04:11:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.928 04:11:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.928 04:11:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.928 04:11:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.928 04:11:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.928 04:11:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:12.928 Found net devices under 0000:af:00.0: cvl_0_0 00:20:12.928 04:11:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.928 04:11:27 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.928 04:11:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.928 04:11:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.928 04:11:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.928 04:11:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:12.928 Found net devices under 0000:af:00.1: cvl_0_1 00:20:12.928 04:11:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.928 04:11:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:12.928 04:11:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:12.928 04:11:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:12.928 04:11:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.928 04:11:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.928 04:11:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.928 04:11:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.928 04:11:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.928 04:11:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.928 04:11:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.928 04:11:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.928 04:11:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.928 04:11:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.928 04:11:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.928 04:11:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.928 04:11:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.928 04:11:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.928 04:11:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.928 04:11:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.928 04:11:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.928 04:11:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.928 04:11:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.928 04:11:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:20:12.928 00:20:12.928 --- 10.0.0.2 ping statistics --- 00:20:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.928 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:12.928 04:11:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:20:12.928 00:20:12.928 --- 10.0.0.1 ping statistics --- 00:20:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.928 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:12.928 04:11:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.928 04:11:27 -- nvmf/common.sh@411 -- # return 0 00:20:12.928 04:11:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:12.928 04:11:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.928 04:11:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:12.928 04:11:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.928 04:11:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:12.928 04:11:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:12.928 04:11:27 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:12.928 04:11:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.928 04:11:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.928 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:12.928 04:11:27 -- nvmf/common.sh@470 -- # nvmfpid=3883299 00:20:12.928 04:11:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:12.928 04:11:27 -- nvmf/common.sh@471 -- # waitforlisten 3883299 00:20:12.928 04:11:27 -- common/autotest_common.sh@817 -- # '[' -z 3883299 ']' 00:20:12.928 04:11:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.928 04:11:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.928 04:11:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.928 04:11:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.928 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:12.928 [2024-04-19 04:11:27.452206] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:12.928 [2024-04-19 04:11:27.452263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.187 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.187 [2024-04-19 04:11:27.538321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.187 [2024-04-19 04:11:27.625310] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.187 [2024-04-19 04:11:27.625361] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.187 [2024-04-19 04:11:27.625372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.187 [2024-04-19 04:11:27.625381] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.187 [2024-04-19 04:11:27.625389] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
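
The startup banner above repeats the two supported ways to inspect tracepoints for this target instance (app name nvmf, shm id 0). As the notices themselves state:

    # live snapshot of events while nvmf_tgt -i 0 is running
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/
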
00:20:13.187 [2024-04-19 04:11:27.625489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:13.187 [2024-04-19 04:11:27.625591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:13.187 [2024-04-19 04:11:27.625704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:13.187 [2024-04-19 04:11:27.625705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:14.119 04:11:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:14.119 04:11:28 -- common/autotest_common.sh@850 -- # return 0
00:20:14.119 04:11:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:14.119 04:11:28 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 04:11:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:14.119 04:11:28 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 [2024-04-19 04:11:28.436197] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:14.119 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.119 04:11:28 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 Malloc0
00:20:14.119 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.119 04:11:28 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.119 04:11:28 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.119 04:11:28 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 [2024-04-19 04:11:28.492010] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:14.119 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.119 04:11:28 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:20:14.119 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.119 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.119 [2024-04-19 04:11:28.499753] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:20:14.119 [
00:20:14.119 {
00:20:14.119 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:14.119 "subtype": "Discovery",
00:20:14.119 "listen_addresses": [],
00:20:14.119 "allow_any_host": true,
00:20:14.119 "hosts": []
00:20:14.119 },
00:20:14.119 {
00:20:14.119 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:14.119 "subtype": "NVMe",
00:20:14.119 "listen_addresses": [
00:20:14.119 {
00:20:14.119 "transport": "TCP",
00:20:14.119 "trtype": "TCP",
00:20:14.119 "adrfam": "IPv4",
00:20:14.119 "traddr": "10.0.0.2",
00:20:14.119 "trsvcid": "4420"
00:20:14.119 }
00:20:14.119 ],
00:20:14.119 "allow_any_host": true,
00:20:14.119 "hosts": [],
00:20:14.119 "serial_number": "SPDK00000000000001",
00:20:14.119 "model_number": "SPDK bdev Controller",
00:20:14.119 "max_namespaces": 2,
00:20:14.119 "min_cntlid": 1,
00:20:14.119 "max_cntlid": 65519,
00:20:14.119 "namespaces": [
00:20:14.119 {
00:20:14.119 "nsid": 1,
00:20:14.119 "bdev_name": "Malloc0",
00:20:14.119 "name": "Malloc0",
00:20:14.119 "nguid": "951BE231CB3F4EBF9C7F4ACE762A9F0F",
00:20:14.119 "uuid": "951be231-cb3f-4ebf-9c7f-4ace762a9f0f"
00:20:14.119 }
00:20:14.120 ]
00:20:14.120 }
00:20:14.120 ]
00:20:14.120 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.120 04:11:28 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:20:14.120 04:11:28 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:20:14.120 04:11:28 -- host/aer.sh@33 -- # aerpid=3883568
00:20:14.120 04:11:28 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:20:14.120 04:11:28 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:20:14.120 04:11:28 -- common/autotest_common.sh@1251 -- # local i=0
00:20:14.120 04:11:28 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:14.120 04:11:28 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']'
00:20:14.120 04:11:28 -- common/autotest_common.sh@1254 -- # i=1
00:20:14.120 04:11:28 -- common/autotest_common.sh@1255 -- # sleep 0.1
00:20:14.120 EAL: No free 2048 kB hugepages reported on node 1
00:20:14.120 04:11:28 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:14.120 04:11:28 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']'
00:20:14.120 04:11:28 -- common/autotest_common.sh@1254 -- # i=2
00:20:14.120 04:11:28 -- common/autotest_common.sh@1255 -- # sleep 0.1
00:20:14.378 04:11:28 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:14.378 04:11:28 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:20:14.378 04:11:28 -- common/autotest_common.sh@1262 -- # return 0
00:20:14.378 04:11:28 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 Malloc1
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 [
00:20:14.378 {
00:20:14.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:14.378 "subtype": "Discovery",
00:20:14.378 "listen_addresses": [],
00:20:14.378 "allow_any_host": true,
00:20:14.378 "hosts": []
00:20:14.378 },
00:20:14.378 {
00:20:14.378 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:14.378 "subtype": "NVMe",
00:20:14.378 "listen_addresses": [
00:20:14.378 {
00:20:14.378 "transport": "TCP",
00:20:14.378 "trtype": "TCP",
00:20:14.378 "adrfam": "IPv4",
00:20:14.378 "traddr": "10.0.0.2",
00:20:14.378 "trsvcid": "4420"
00:20:14.378 }
00:20:14.378 ],
00:20:14.378 "allow_any_host": true,
00:20:14.378 "hosts": [],
00:20:14.378 "serial_number": "SPDK00000000000001",
00:20:14.378 "model_number": "SPDK bdev Controller",
00:20:14.378 "max_namespaces": 2,
00:20:14.378 "min_cntlid": 1,
00:20:14.378 "max_cntlid": 65519,
00:20:14.378 "namespaces": [
00:20:14.378 {
00:20:14.378 "nsid": 1,
00:20:14.378 "bdev_name": "Malloc0",
00:20:14.378 "name": "Malloc0",
00:20:14.378 "nguid": "951BE231CB3F4EBF9C7F4ACE762A9F0F",
00:20:14.378 "uuid": "951be231-cb3f-4ebf-9c7f-4ace762a9f0f"
00:20:14.378 },
00:20:14.378 {
00:20:14.378 Asynchronous Event Request test
00:20:14.378 Attaching to 10.0.0.2
00:20:14.378 Attached to 10.0.0.2
00:20:14.378 Registering asynchronous event callbacks...
00:20:14.378 Starting namespace attribute notice tests for all controllers...
00:20:14.378 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:20:14.378 aer_cb - Changed Namespace
00:20:14.378 Cleaning up...
00:20:14.378 "nsid": 2,
00:20:14.378 "bdev_name": "Malloc1",
00:20:14.378 "name": "Malloc1",
00:20:14.378 "nguid": "E4DC522A284D46CE8A66D137F2A054CB",
00:20:14.378 "uuid": "e4dc522a-284d-46ce-8a66-d137f2a054cb"
00:20:14.378 }
00:20:14.378 ]
00:20:14.378 }
00:20:14.378 ]
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@43 -- # wait 3883568
00:20:14.378 04:11:28 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:14.378 04:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:14.378 04:11:28 -- common/autotest_common.sh@10 -- # set +x
00:20:14.378 04:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:14.378 04:11:28 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:20:14.378 04:11:28 -- host/aer.sh@51 -- # nvmftestfini
00:20:14.378 04:11:28 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:14.378 04:11:28 -- nvmf/common.sh@117 -- # sync
00:20:14.378 04:11:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:14.378 04:11:28 -- nvmf/common.sh@120 -- # set +e
00:20:14.378 04:11:28 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:14.378 04:11:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:14.378 rmmod nvme_tcp
00:20:14.378 rmmod nvme_fabrics
00:20:14.378 rmmod nvme_keyring
00:20:14.637 04:11:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:14.637 04:11:28 -- nvmf/common.sh@124 -- # set -e
00:20:14.637 04:11:28 -- nvmf/common.sh@125 -- # return 0
00:20:14.637 04:11:28 -- nvmf/common.sh@478 -- # '[' -n 3883299 ']'
00:20:14.637 04:11:28 -- nvmf/common.sh@479 -- # killprocess 3883299
00:20:14.637 04:11:28 -- common/autotest_common.sh@936 -- # '[' -z 3883299 ']'
00:20:14.637 04:11:28 -- common/autotest_common.sh@940 -- # kill -0 3883299
00:20:14.637 04:11:28 -- common/autotest_common.sh@941 -- # uname
00:20:14.637 04:11:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:14.637 04:11:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3883299
00:20:14.637 04:11:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:14.637 04:11:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:14.637 04:11:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3883299'
00:20:14.637 killing process with pid 3883299
00:20:14.637 04:11:28 -- common/autotest_common.sh@955 -- # kill 3883299
00:20:14.637 [2024-04-19 04:11:28.974973] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:20:14.637 04:11:28 -- common/autotest_common.sh@960 -- # wait 3883299
00:20:14.896 04:11:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:14.896 04:11:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:14.896 04:11:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:14.896 04:11:29 -- nvmf/common.sh@274
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.896 04:11:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.896 04:11:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.896 04:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.896 04:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.800 04:11:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.800 00:20:16.800 real 0m9.429s 00:20:16.800 user 0m7.821s 00:20:16.800 sys 0m4.586s 00:20:16.800 04:11:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:16.800 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:16.800 ************************************ 00:20:16.800 END TEST nvmf_aer 00:20:16.800 ************************************ 00:20:16.800 04:11:31 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:16.800 04:11:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:16.800 04:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:16.800 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:17.059 ************************************ 00:20:17.059 START TEST nvmf_async_init 00:20:17.059 ************************************ 00:20:17.059 04:11:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:17.059 * Looking for test storage... 00:20:17.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:17.059 04:11:31 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.059 04:11:31 -- nvmf/common.sh@7 -- # uname -s 00:20:17.059 04:11:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.059 04:11:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.059 04:11:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.059 04:11:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.059 04:11:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.059 04:11:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.059 04:11:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.059 04:11:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.059 04:11:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.059 04:11:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.059 04:11:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:17.059 04:11:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:17.059 04:11:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.059 04:11:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.059 04:11:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.059 04:11:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.059 04:11:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.059 04:11:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.059 04:11:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.059 04:11:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.059 04:11:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.059 04:11:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.059 04:11:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.059 04:11:31 -- paths/export.sh@5 -- # export PATH 00:20:17.059 04:11:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.059 04:11:31 -- nvmf/common.sh@47 -- # : 0 00:20:17.059 04:11:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.059 04:11:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.059 04:11:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.059 04:11:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.059 04:11:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.059 04:11:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.059 04:11:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.059 04:11:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.059 04:11:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:17.059 04:11:31 -- host/async_init.sh@14 -- # null_block_size=512 00:20:17.060 04:11:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:17.060 04:11:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:17.060 04:11:31 -- host/async_init.sh@20 -- # uuidgen 00:20:17.060 04:11:31 -- host/async_init.sh@20 -- # tr -d - 00:20:17.060 04:11:31 -- host/async_init.sh@20 -- # nguid=bf49d361d34841b1a241efef53cafcdb 00:20:17.060 04:11:31 -- host/async_init.sh@22 -- # nvmftestinit 00:20:17.060 04:11:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
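Before touching the target, host/async_init.sh derives its namespace identifiers: the @20 trace above pipes uuidgen through tr -d - so the test holds the same identity in both spellings, the 32-hex-digit NGUID it will pass to nvmf_subsystem_add_ns and the dashed uuid it later expects back from bdev_get_bdevs. A sketch of that derivation (the size comments are my reading of how the values are used below):

```bash
# Identity and sizing derived in the host/async_init.sh@13-@20 trace above.
null_bdev_size=1024   # MiB; size argument later given to bdev_null_create
null_block_size=512   # bytes per logical block
uuid=$(uuidgen)                   # e.g. bf49d361-d348-41b1-a241-efef53cafcdb
nguid=$(echo "$uuid" | tr -d -)   # e.g. bf49d361d34841b1a241efef53cafcdb
echo "namespace will carry nguid=$nguid, reported back as uuid=$uuid"
```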
00:20:17.060 04:11:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.060 04:11:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:17.060 04:11:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:17.060 04:11:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:17.060 04:11:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.060 04:11:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:17.060 04:11:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.060 04:11:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:17.060 04:11:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:17.060 04:11:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.060 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:23.678 04:11:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:23.679 04:11:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.679 04:11:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.679 04:11:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.679 04:11:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.679 04:11:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.679 04:11:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.679 04:11:36 -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.679 04:11:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.679 04:11:36 -- nvmf/common.sh@296 -- # e810=() 00:20:23.679 04:11:36 -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.679 04:11:36 -- nvmf/common.sh@297 -- # x722=() 00:20:23.679 04:11:36 -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.679 04:11:36 -- nvmf/common.sh@298 -- # mlx=() 00:20:23.679 04:11:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.679 04:11:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.679 04:11:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.679 04:11:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.679 04:11:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.679 04:11:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:23.679 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:23.679 04:11:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.679 04:11:36 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.679 04:11:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:23.679 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:23.679 04:11:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.679 04:11:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.679 04:11:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.679 04:11:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:23.679 Found net devices under 0000:af:00.0: cvl_0_0 00:20:23.679 04:11:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.679 04:11:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.679 04:11:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.679 04:11:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.679 04:11:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:23.679 Found net devices under 0000:af:00.1: cvl_0_1 00:20:23.679 04:11:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.679 04:11:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:23.679 04:11:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:23.679 04:11:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:23.679 04:11:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.679 04:11:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.679 04:11:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.679 04:11:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.679 04:11:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.679 04:11:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.679 04:11:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.679 04:11:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.679 04:11:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.679 04:11:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.679 04:11:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:23.679 04:11:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.679 04:11:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
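The device discovery that just ran works entirely from sysfs: nvmf/common.sh keeps per-vendor device-ID lists (e810, x722, mlx), filters the PCI bus against them, and resolves each surviving function to its kernel netdev through the net/ glob at @383. A simplified sketch of that resolution, assuming the same Intel E810 ID (0x8086:0x159b) seen in the log; the real script works from a prebuilt pci_bus_cache map rather than rescanning sysfs, and the namespace plumbing it feeds resumes right below:

```bash
# Map supported NVMe-oF-capable NICs to netdev names, sysfs only (sketch).
intel=0x8086
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
    [[ -d $pci/net ]] || continue               # function not bound to a netdev
    pci_net_devs=("$pci/net/"*)                 # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the sysfs path
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```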
00:20:23.679 04:11:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:23.679 04:11:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:23.679 04:11:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:23.679 04:11:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:23.679 04:11:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:23.679 04:11:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:23.679 04:11:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:23.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:23.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms
00:20:23.679
00:20:23.679 --- 10.0.0.2 ping statistics ---
00:20:23.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:23.679 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:20:23.679 04:11:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:23.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:23.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms
00:20:23.679
00:20:23.679 --- 10.0.0.1 ping statistics ---
00:20:23.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:23.679 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:20:23.679 04:11:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:23.679 04:11:37 -- nvmf/common.sh@411 -- # return 0
00:20:23.679 04:11:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:23.679 04:11:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:23.679 04:11:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:23.679 04:11:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:23.679 04:11:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:23.679 04:11:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:23.679 04:11:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:20:23.679 04:11:37 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:20:23.679 04:11:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:23.679 04:11:37 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:23.679 04:11:37 -- common/autotest_common.sh@10 -- # set +x
00:20:23.679 04:11:37 -- nvmf/common.sh@470 -- # nvmfpid=3887162
00:20:23.679 04:11:37 -- nvmf/common.sh@471 -- # waitforlisten 3887162
00:20:23.679 04:11:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:20:23.679 04:11:37 -- common/autotest_common.sh@817 -- # '[' -z 3887162 ']'
00:20:23.679 04:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:23.679 04:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:23.679 04:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:23.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:23.679 04:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:23.679 04:11:37 -- common/autotest_common.sh@10 -- # set +x
00:20:23.679 [2024-04-19 04:11:37.286983] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
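With connectivity verified in both directions, nvmfappstart launches the target inside the namespace (NVMF_APP is prefixed with the netns wrapper at @270 above) and waitforlisten blocks until the RPC socket answers; the EAL startup that follows is that app coming up. Roughly, with the helper reduced to a poll loop (workspace paths as in the trace; the real waitforlisten is more careful):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# waitforlisten, in essence: poll the UNIX-domain RPC socket until it answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
```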
00:20:23.679 [2024-04-19 04:11:37.287050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.679 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.679 [2024-04-19 04:11:37.374530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.679 [2024-04-19 04:11:37.465793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.679 [2024-04-19 04:11:37.465839] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.679 [2024-04-19 04:11:37.465849] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.679 [2024-04-19 04:11:37.465857] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.679 [2024-04-19 04:11:37.465865] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.679 [2024-04-19 04:11:37.465886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.947 04:11:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.947 04:11:38 -- common/autotest_common.sh@850 -- # return 0 00:20:23.947 04:11:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.947 04:11:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 04:11:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.947 04:11:38 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 [2024-04-19 04:11:38.262461] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 null0 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bf49d361d34841b1a241efef53cafcdb 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
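rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the target-side setup traced here boils down to six RPCs. Spelled out directly, with arguments exactly as logged (-o is the transport option string carried in NVMF_TRANSPORT_OPTS above):

```bash
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # $SPDK as in the sketch above
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512               # 1024 MiB, 512 B blocks
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g bf49d361d34841b1a241efef53cafcdb            # the NGUID derived earlier
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```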
00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:23.947 [2024-04-19 04:11:38.306730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.947 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.947 04:11:38 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:23.947 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.947 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.206 nvme0n1 00:20:24.206 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.206 04:11:38 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.206 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.206 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.206 [ 00:20:24.206 { 00:20:24.206 "name": "nvme0n1", 00:20:24.206 "aliases": [ 00:20:24.206 "bf49d361-d348-41b1-a241-efef53cafcdb" 00:20:24.206 ], 00:20:24.206 "product_name": "NVMe disk", 00:20:24.206 "block_size": 512, 00:20:24.206 "num_blocks": 2097152, 00:20:24.206 "uuid": "bf49d361-d348-41b1-a241-efef53cafcdb", 00:20:24.206 "assigned_rate_limits": { 00:20:24.206 "rw_ios_per_sec": 0, 00:20:24.206 "rw_mbytes_per_sec": 0, 00:20:24.206 "r_mbytes_per_sec": 0, 00:20:24.206 "w_mbytes_per_sec": 0 00:20:24.206 }, 00:20:24.206 "claimed": false, 00:20:24.206 "zoned": false, 00:20:24.206 "supported_io_types": { 00:20:24.206 "read": true, 00:20:24.206 "write": true, 00:20:24.206 "unmap": false, 00:20:24.206 "write_zeroes": true, 00:20:24.206 "flush": true, 00:20:24.206 "reset": true, 00:20:24.206 "compare": true, 00:20:24.206 "compare_and_write": true, 00:20:24.206 "abort": true, 00:20:24.206 "nvme_admin": true, 00:20:24.206 "nvme_io": true 00:20:24.206 }, 00:20:24.206 "memory_domains": [ 00:20:24.206 { 00:20:24.206 "dma_device_id": "system", 00:20:24.206 "dma_device_type": 1 00:20:24.206 } 00:20:24.206 ], 00:20:24.206 "driver_specific": { 00:20:24.206 "nvme": [ 00:20:24.206 { 00:20:24.206 "trid": { 00:20:24.206 "trtype": "TCP", 00:20:24.206 "adrfam": "IPv4", 00:20:24.206 "traddr": "10.0.0.2", 00:20:24.206 "trsvcid": "4420", 00:20:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.206 }, 00:20:24.206 "ctrlr_data": { 00:20:24.206 "cntlid": 1, 00:20:24.206 "vendor_id": "0x8086", 00:20:24.206 "model_number": "SPDK bdev Controller", 00:20:24.206 "serial_number": "00000000000000000000", 00:20:24.206 "firmware_revision": "24.05", 00:20:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.206 "oacs": { 00:20:24.206 "security": 0, 00:20:24.206 "format": 0, 00:20:24.206 "firmware": 0, 00:20:24.206 "ns_manage": 0 00:20:24.206 }, 00:20:24.206 "multi_ctrlr": true, 00:20:24.206 "ana_reporting": false 00:20:24.206 }, 00:20:24.206 "vs": { 00:20:24.206 "nvme_version": "1.3" 00:20:24.206 }, 00:20:24.206 "ns_data": { 00:20:24.206 "id": 1, 00:20:24.206 "can_share": true 00:20:24.206 } 00:20:24.206 } 00:20:24.206 ], 00:20:24.206 "mp_policy": "active_passive" 00:20:24.206 } 00:20:24.206 } 00:20:24.206 ] 00:20:24.206 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.206 04:11:38 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:24.206 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.206 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.206 [2024-04-19 04:11:38.568958] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:24.206 [2024-04-19 04:11:38.569028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664850 (9): Bad file descriptor 00:20:24.206 [2024-04-19 04:11:38.701461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:24.206 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.206 04:11:38 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.206 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.206 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.206 [ 00:20:24.206 { 00:20:24.206 "name": "nvme0n1", 00:20:24.206 "aliases": [ 00:20:24.206 "bf49d361-d348-41b1-a241-efef53cafcdb" 00:20:24.206 ], 00:20:24.206 "product_name": "NVMe disk", 00:20:24.206 "block_size": 512, 00:20:24.206 "num_blocks": 2097152, 00:20:24.206 "uuid": "bf49d361-d348-41b1-a241-efef53cafcdb", 00:20:24.206 "assigned_rate_limits": { 00:20:24.206 "rw_ios_per_sec": 0, 00:20:24.206 "rw_mbytes_per_sec": 0, 00:20:24.206 "r_mbytes_per_sec": 0, 00:20:24.206 "w_mbytes_per_sec": 0 00:20:24.206 }, 00:20:24.206 "claimed": false, 00:20:24.206 "zoned": false, 00:20:24.206 "supported_io_types": { 00:20:24.206 "read": true, 00:20:24.206 "write": true, 00:20:24.206 "unmap": false, 00:20:24.206 "write_zeroes": true, 00:20:24.206 "flush": true, 00:20:24.206 "reset": true, 00:20:24.206 "compare": true, 00:20:24.206 "compare_and_write": true, 00:20:24.206 "abort": true, 00:20:24.206 "nvme_admin": true, 00:20:24.206 "nvme_io": true 00:20:24.206 }, 00:20:24.206 "memory_domains": [ 00:20:24.206 { 00:20:24.206 "dma_device_id": "system", 00:20:24.206 "dma_device_type": 1 00:20:24.206 } 00:20:24.206 ], 00:20:24.206 "driver_specific": { 00:20:24.206 "nvme": [ 00:20:24.206 { 00:20:24.206 "trid": { 00:20:24.206 "trtype": "TCP", 00:20:24.206 "adrfam": "IPv4", 00:20:24.206 "traddr": "10.0.0.2", 00:20:24.206 "trsvcid": "4420", 00:20:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.206 }, 00:20:24.206 "ctrlr_data": { 00:20:24.206 "cntlid": 2, 00:20:24.206 "vendor_id": "0x8086", 00:20:24.206 "model_number": "SPDK bdev Controller", 00:20:24.206 "serial_number": "00000000000000000000", 00:20:24.206 "firmware_revision": "24.05", 00:20:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.206 "oacs": { 00:20:24.206 "security": 0, 00:20:24.206 "format": 0, 00:20:24.206 "firmware": 0, 00:20:24.206 "ns_manage": 0 00:20:24.206 }, 00:20:24.206 "multi_ctrlr": true, 00:20:24.206 "ana_reporting": false 00:20:24.206 }, 00:20:24.206 "vs": { 00:20:24.206 "nvme_version": "1.3" 00:20:24.206 }, 00:20:24.206 "ns_data": { 00:20:24.206 "id": 1, 00:20:24.206 "can_share": true 00:20:24.206 } 00:20:24.206 } 00:20:24.206 ], 00:20:24.206 "mp_policy": "active_passive" 00:20:24.206 } 00:20:24.206 } 00:20:24.206 ] 00:20:24.206 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.206 04:11:38 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.206 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.206 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@53 -- # mktemp 00:20:24.464 04:11:38 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.G8scBS5xDS 00:20:24.464 04:11:38 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:24.464 04:11:38 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.G8scBS5xDS 00:20:24.464 04:11:38 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 [2024-04-19 04:11:38.765589] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.464 [2024-04-19 04:11:38.765736] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8scBS5xDS 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 [2024-04-19 04:11:38.773608] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G8scBS5xDS 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 [2024-04-19 04:11:38.785644] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.464 [2024-04-19 04:11:38.785691] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.464 nvme0n1 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 [ 00:20:24.464 { 00:20:24.464 "name": "nvme0n1", 00:20:24.464 "aliases": [ 00:20:24.464 "bf49d361-d348-41b1-a241-efef53cafcdb" 00:20:24.464 ], 00:20:24.464 "product_name": "NVMe disk", 00:20:24.464 "block_size": 512, 00:20:24.464 "num_blocks": 2097152, 00:20:24.464 "uuid": "bf49d361-d348-41b1-a241-efef53cafcdb", 00:20:24.464 "assigned_rate_limits": { 00:20:24.464 "rw_ios_per_sec": 0, 00:20:24.464 "rw_mbytes_per_sec": 0, 00:20:24.464 "r_mbytes_per_sec": 0, 00:20:24.464 "w_mbytes_per_sec": 0 00:20:24.464 }, 00:20:24.464 "claimed": false, 00:20:24.464 "zoned": false, 00:20:24.464 "supported_io_types": { 00:20:24.464 "read": true, 00:20:24.464 "write": true, 00:20:24.464 "unmap": false, 00:20:24.464 "write_zeroes": true, 00:20:24.464 "flush": true, 00:20:24.464 "reset": true, 00:20:24.464 "compare": true, 00:20:24.464 "compare_and_write": true, 00:20:24.464 
"abort": true, 00:20:24.464 "nvme_admin": true, 00:20:24.464 "nvme_io": true 00:20:24.464 }, 00:20:24.464 "memory_domains": [ 00:20:24.464 { 00:20:24.464 "dma_device_id": "system", 00:20:24.464 "dma_device_type": 1 00:20:24.464 } 00:20:24.464 ], 00:20:24.464 "driver_specific": { 00:20:24.464 "nvme": [ 00:20:24.464 { 00:20:24.464 "trid": { 00:20:24.464 "trtype": "TCP", 00:20:24.464 "adrfam": "IPv4", 00:20:24.464 "traddr": "10.0.0.2", 00:20:24.464 "trsvcid": "4421", 00:20:24.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:24.464 }, 00:20:24.464 "ctrlr_data": { 00:20:24.464 "cntlid": 3, 00:20:24.464 "vendor_id": "0x8086", 00:20:24.464 "model_number": "SPDK bdev Controller", 00:20:24.464 "serial_number": "00000000000000000000", 00:20:24.464 "firmware_revision": "24.05", 00:20:24.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.464 "oacs": { 00:20:24.464 "security": 0, 00:20:24.464 "format": 0, 00:20:24.464 "firmware": 0, 00:20:24.464 "ns_manage": 0 00:20:24.464 }, 00:20:24.464 "multi_ctrlr": true, 00:20:24.464 "ana_reporting": false 00:20:24.464 }, 00:20:24.464 "vs": { 00:20:24.464 "nvme_version": "1.3" 00:20:24.464 }, 00:20:24.464 "ns_data": { 00:20:24.464 "id": 1, 00:20:24.464 "can_share": true 00:20:24.464 } 00:20:24.464 } 00:20:24.464 ], 00:20:24.464 "mp_policy": "active_passive" 00:20:24.464 } 00:20:24.464 } 00:20:24.464 ] 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.464 04:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.464 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:24.464 04:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.464 04:11:38 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.G8scBS5xDS 00:20:24.464 04:11:38 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:24.464 04:11:38 -- host/async_init.sh@78 -- # nvmftestfini 00:20:24.464 04:11:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:24.464 04:11:38 -- nvmf/common.sh@117 -- # sync 00:20:24.464 04:11:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.464 04:11:38 -- nvmf/common.sh@120 -- # set +e 00:20:24.464 04:11:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.464 04:11:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.464 rmmod nvme_tcp 00:20:24.464 rmmod nvme_fabrics 00:20:24.464 rmmod nvme_keyring 00:20:24.464 04:11:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.464 04:11:38 -- nvmf/common.sh@124 -- # set -e 00:20:24.464 04:11:38 -- nvmf/common.sh@125 -- # return 0 00:20:24.464 04:11:38 -- nvmf/common.sh@478 -- # '[' -n 3887162 ']' 00:20:24.464 04:11:38 -- nvmf/common.sh@479 -- # killprocess 3887162 00:20:24.464 04:11:38 -- common/autotest_common.sh@936 -- # '[' -z 3887162 ']' 00:20:24.464 04:11:38 -- common/autotest_common.sh@940 -- # kill -0 3887162 00:20:24.464 04:11:38 -- common/autotest_common.sh@941 -- # uname 00:20:24.464 04:11:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:24.464 04:11:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3887162 00:20:24.723 04:11:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:24.723 04:11:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:24.723 04:11:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3887162' 00:20:24.723 killing process with pid 3887162 00:20:24.723 04:11:39 -- common/autotest_common.sh@955 -- # kill 3887162 00:20:24.723 
[2024-04-19 04:11:39.008944] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:24.723 [2024-04-19 04:11:39.008975] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:24.723 04:11:39 -- common/autotest_common.sh@960 -- # wait 3887162 00:20:24.723 04:11:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:24.723 04:11:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:24.723 04:11:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:24.723 04:11:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.723 04:11:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.723 04:11:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.723 04:11:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.723 04:11:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.259 04:11:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.259 00:20:27.259 real 0m9.847s 00:20:27.259 user 0m3.742s 00:20:27.259 sys 0m4.771s 00:20:27.259 04:11:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:27.259 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.259 ************************************ 00:20:27.259 END TEST nvmf_async_init 00:20:27.259 ************************************ 00:20:27.259 04:11:41 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:27.259 04:11:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:27.259 04:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.259 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.259 ************************************ 00:20:27.259 START TEST dma 00:20:27.259 ************************************ 00:20:27.259 04:11:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:27.259 * Looking for test storage... 
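The dma suite starting here is a deliberate no-op on TCP runs: host/dma.sh gates on the transport and bails out immediately, which is why the whole test accounts for roughly 0.1 s of wall time below. The gate traced at @12-@13 further on amounts to the following (the variable name is assumed; the xtrace already shows it expanded to tcp):

```bash
# host/dma.sh, in essence: the DMA offload test only applies to RDMA.
if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0
fi
```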
00:20:27.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.260 04:11:41 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.260 04:11:41 -- nvmf/common.sh@7 -- # uname -s 00:20:27.260 04:11:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.260 04:11:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.260 04:11:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.260 04:11:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.260 04:11:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.260 04:11:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.260 04:11:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.260 04:11:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.260 04:11:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.260 04:11:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.260 04:11:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:27.260 04:11:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:27.260 04:11:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.260 04:11:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.260 04:11:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.260 04:11:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.260 04:11:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.260 04:11:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.260 04:11:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.260 04:11:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.260 04:11:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.260 04:11:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.260 04:11:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.260 04:11:41 -- paths/export.sh@5 -- # export PATH 00:20:27.260 04:11:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.260 04:11:41 -- nvmf/common.sh@47 -- # : 0 00:20:27.260 04:11:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.260 04:11:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.260 04:11:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.260 04:11:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.260 04:11:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.260 04:11:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.260 04:11:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.260 04:11:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.260 04:11:41 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:27.260 04:11:41 -- host/dma.sh@13 -- # exit 0 00:20:27.260 00:20:27.260 real 0m0.128s 00:20:27.260 user 0m0.056s 00:20:27.260 sys 0m0.081s 00:20:27.260 04:11:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:27.260 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.260 ************************************ 00:20:27.260 END TEST dma 00:20:27.260 ************************************ 00:20:27.260 04:11:41 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:27.260 04:11:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:27.260 04:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.260 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:27.260 ************************************ 00:20:27.260 START TEST nvmf_identify 00:20:27.260 ************************************ 00:20:27.260 04:11:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:27.520 * Looking for test storage... 
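Each suite in this phase is driven through the run_test wrapper, which validates its arguments, times the script, and brackets the output with the START TEST / END TEST banners seen throughout. A rough approximation of its shape (the real helper lives in autotest_common.sh and also feeds the timing report, so treat this as a sketch):

```bash
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # source of the real/user/sys lines in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_identify \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh \
    --transport=tcp
```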
00:20:27.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.520 04:11:41 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.520 04:11:41 -- nvmf/common.sh@7 -- # uname -s 00:20:27.520 04:11:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.520 04:11:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.520 04:11:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.520 04:11:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.520 04:11:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.520 04:11:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.520 04:11:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.520 04:11:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.520 04:11:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.520 04:11:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.520 04:11:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:27.520 04:11:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:27.520 04:11:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.520 04:11:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.520 04:11:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.520 04:11:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.520 04:11:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.520 04:11:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.520 04:11:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.520 04:11:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.520 04:11:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.520 04:11:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.520 04:11:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.520 04:11:41 -- paths/export.sh@5 -- # export PATH 00:20:27.520 04:11:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.520 04:11:41 -- nvmf/common.sh@47 -- # : 0 00:20:27.520 04:11:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.520 04:11:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.520 04:11:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.520 04:11:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.520 04:11:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.520 04:11:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.520 04:11:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.520 04:11:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.520 04:11:41 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.520 04:11:41 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.520 04:11:41 -- host/identify.sh@14 -- # nvmftestinit 00:20:27.520 04:11:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:27.520 04:11:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.520 04:11:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:27.520 04:11:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:27.520 04:11:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:27.520 04:11:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.520 04:11:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.520 04:11:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.520 04:11:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:27.520 04:11:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:27.520 04:11:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.520 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:20:32.791 04:11:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:32.791 04:11:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.791 04:11:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.791 04:11:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.791 04:11:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.791 04:11:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.791 04:11:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.791 04:11:47 -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.791 04:11:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.791 04:11:47 -- nvmf/common.sh@296 
-- # e810=() 00:20:32.791 04:11:47 -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.791 04:11:47 -- nvmf/common.sh@297 -- # x722=() 00:20:32.791 04:11:47 -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.791 04:11:47 -- nvmf/common.sh@298 -- # mlx=() 00:20:32.791 04:11:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.792 04:11:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.792 04:11:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.792 04:11:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.792 04:11:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.792 04:11:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:32.792 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:32.792 04:11:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.792 04:11:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:32.792 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:32.792 04:11:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.792 04:11:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.792 04:11:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.792 04:11:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:32.792 Found 
net devices under 0000:af:00.0: cvl_0_0 00:20:32.792 04:11:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.792 04:11:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.792 04:11:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.792 04:11:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.792 04:11:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:32.792 Found net devices under 0000:af:00.1: cvl_0_1 00:20:32.792 04:11:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.792 04:11:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:32.792 04:11:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:32.792 04:11:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:32.792 04:11:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.792 04:11:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.792 04:11:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.792 04:11:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.792 04:11:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.792 04:11:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.792 04:11:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.792 04:11:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.792 04:11:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.792 04:11:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.792 04:11:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.792 04:11:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.792 04:11:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.792 04:11:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.792 04:11:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.051 04:11:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.051 04:11:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.051 04:11:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.051 04:11:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.051 04:11:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:20:33.051 00:20:33.051 --- 10.0.0.2 ping statistics --- 00:20:33.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.051 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:33.051 04:11:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:20:33.051 00:20:33.051 --- 10.0.0.1 ping statistics --- 00:20:33.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.051 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:20:33.051 04:11:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.051 04:11:47 -- nvmf/common.sh@411 -- # return 0 00:20:33.051 04:11:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:33.051 04:11:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.051 04:11:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:33.051 04:11:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:33.051 04:11:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.051 04:11:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:33.051 04:11:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:33.051 04:11:47 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:33.051 04:11:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:33.051 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.051 04:11:47 -- host/identify.sh@19 -- # nvmfpid=3891160 00:20:33.051 04:11:47 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:33.051 04:11:47 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.051 04:11:47 -- host/identify.sh@23 -- # waitforlisten 3891160 00:20:33.051 04:11:47 -- common/autotest_common.sh@817 -- # '[' -z 3891160 ']' 00:20:33.051 04:11:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.051 04:11:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:33.051 04:11:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.051 04:11:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:33.051 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.051 [2024-04-19 04:11:47.532763] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:33.051 [2024-04-19 04:11:47.532805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.309 [2024-04-19 04:11:47.604173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.309 [2024-04-19 04:11:47.696582] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.309 [2024-04-19 04:11:47.696627] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.309 [2024-04-19 04:11:47.696637] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.309 [2024-04-19 04:11:47.696645] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.309 [2024-04-19 04:11:47.696655] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
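
The nvmf/common.sh trace above carves one of the two E810 ports into a private network namespace and leaves the other in the root namespace, so a single host can act as both NVMe/TCP target and initiator. Condensed into a standalone sketch (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are simply the values this run used; adjust for your system):

  # target-side port moves into its own namespace; initiator port stays in the root ns
  NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
  modprobe nvme-tcp
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
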
00:20:33.309 [2024-04-19 04:11:47.696712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.309 [2024-04-19 04:11:47.696815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.309 [2024-04-19 04:11:47.696921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.309 [2024-04-19 04:11:47.696921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.309 04:11:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.309 04:11:47 -- common/autotest_common.sh@850 -- # return 0 00:20:33.309 04:11:47 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.309 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.309 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.309 [2024-04-19 04:11:47.817740] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.309 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.309 04:11:47 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:33.309 04:11:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.309 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 04:11:47 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 Malloc0 00:20:33.569 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.569 04:11:47 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.569 04:11:47 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.569 04:11:47 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 [2024-04-19 04:11:47.909876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.569 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.569 04:11:47 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.569 04:11:47 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:33.569 04:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.569 04:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:33.569 [2024-04-19 04:11:47.925657] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:33.569 [ 
00:20:33.569 {
00:20:33.569   "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:33.569   "subtype": "Discovery",
00:20:33.569   "listen_addresses": [
00:20:33.569     {
00:20:33.569       "transport": "TCP",
00:20:33.569       "trtype": "TCP",
00:20:33.569       "adrfam": "IPv4",
00:20:33.569       "traddr": "10.0.0.2",
00:20:33.569       "trsvcid": "4420"
00:20:33.569     }
00:20:33.569   ],
00:20:33.569   "allow_any_host": true,
00:20:33.569   "hosts": []
00:20:33.569 },
00:20:33.569 {
00:20:33.569   "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:33.569   "subtype": "NVMe",
00:20:33.569   "listen_addresses": [
00:20:33.569     {
00:20:33.569       "transport": "TCP",
00:20:33.569       "trtype": "TCP",
00:20:33.569       "adrfam": "IPv4",
00:20:33.569       "traddr": "10.0.0.2",
00:20:33.569       "trsvcid": "4420"
00:20:33.569     }
00:20:33.569   ],
00:20:33.569   "allow_any_host": true,
00:20:33.569   "hosts": [],
00:20:33.569   "serial_number": "SPDK00000000000001",
00:20:33.569   "model_number": "SPDK bdev Controller",
00:20:33.569   "max_namespaces": 32,
00:20:33.569   "min_cntlid": 1,
00:20:33.569   "max_cntlid": 65519,
00:20:33.569   "namespaces": [
00:20:33.569     {
00:20:33.569       "nsid": 1,
00:20:33.569       "bdev_name": "Malloc0",
00:20:33.570       "name": "Malloc0",
00:20:33.570       "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:20:33.570       "eui64": "ABCDEF0123456789",
00:20:33.570       "uuid": "770221a1-05fa-4c05-b707-fa6b0f961156"
00:20:33.570     }
00:20:33.570   ]
00:20:33.570 }
00:20:33.570 ]
00:20:33.570 04:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:33.570 04:11:47 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all [2024-04-19 04:11:47.961050] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
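
The rpc_cmd calls traced above are the harness's wrapper around SPDK's scripts/rpc.py; the subsystem layout dumped in the JSON can be rebuilt by hand with the same methods and arguments (a sketch run from an SPDK checkout, against an already-running nvmf_tgt on the default RPC socket):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems    # prints the JSON document shown above
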
00:20:33.570 [2024-04-19 04:11:47.961099] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891393 ] 00:20:33.570 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.570 [2024-04-19 04:11:47.998869] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:33.570 [2024-04-19 04:11:47.998922] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:33.570 [2024-04-19 04:11:47.998929] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:33.570 [2024-04-19 04:11:47.998942] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:33.570 [2024-04-19 04:11:47.998952] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:33.570 [2024-04-19 04:11:47.999335] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:33.570 [2024-04-19 04:11:47.999380] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12ffcb0 0 00:20:33.570 [2024-04-19 04:11:48.013353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:33.570 [2024-04-19 04:11:48.013367] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:33.570 [2024-04-19 04:11:48.013373] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:33.570 [2024-04-19 04:11:48.013378] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:33.570 [2024-04-19 04:11:48.013521] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.013529] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.013534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.013548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:33.570 [2024-04-19 04:11:48.013567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.021356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.021368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.021372] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021377] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.021389] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:33.570 [2024-04-19 04:11:48.021397] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:33.570 [2024-04-19 04:11:48.021404] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:33.570 [2024-04-19 04:11:48.021420] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021426] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021431] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.021441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.021458] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.021694] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.021702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.021710] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021715] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.021723] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:33.570 [2024-04-19 04:11:48.021732] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:33.570 [2024-04-19 04:11:48.021741] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021747] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021751] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.021760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.021773] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.021882] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.021890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.021895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021900] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.021907] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:33.570 [2024-04-19 04:11:48.021917] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.021926] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.021936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.021945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.021958] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.022060] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 
04:11:48.022068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.022073] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022077] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.022085] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.022097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022102] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022107] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.022115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.022129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.022223] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.022231] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.022236] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.022251] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:33.570 [2024-04-19 04:11:48.022257] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.022267] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.022374] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:33.570 [2024-04-19 04:11:48.022381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.022391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022395] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.022409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.022423] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.022520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.022528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.022533] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022538] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.022545] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:33.570 [2024-04-19 04:11:48.022556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022562] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022566] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.570 [2024-04-19 04:11:48.022574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.570 [2024-04-19 04:11:48.022588] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.570 [2024-04-19 04:11:48.022679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.570 [2024-04-19 04:11:48.022687] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.570 [2024-04-19 04:11:48.022692] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.570 [2024-04-19 04:11:48.022696] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.570 [2024-04-19 04:11:48.022703] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:33.570 [2024-04-19 04:11:48.022709] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.022720] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:33.571 [2024-04-19 04:11:48.022730] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.022743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.022749] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.022758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.571 [2024-04-19 04:11:48.022774] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.571 [2024-04-19 04:11:48.022939] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.571 [2024-04-19 04:11:48.022947] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.571 [2024-04-19 04:11:48.022951] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.022957] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ffcb0): datao=0, datal=4096, cccid=0 00:20:33.571 [2024-04-19 04:11:48.022963] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1367a00) on tqpair(0x12ffcb0): expected_datao=0, payload_size=4096 00:20:33.571 [2024-04-19 04:11:48.022968] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.022978] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.022983] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.571 [2024-04-19 04:11:48.023027] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.571 [2024-04-19 04:11:48.023031] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023036] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.571 [2024-04-19 04:11:48.023046] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:33.571 [2024-04-19 04:11:48.023052] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:33.571 [2024-04-19 04:11:48.023058] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:33.571 [2024-04-19 04:11:48.023064] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:33.571 [2024-04-19 04:11:48.023070] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:33.571 [2024-04-19 04:11:48.023075] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.023086] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.023095] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023100] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023105] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.571 [2024-04-19 04:11:48.023127] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.571 [2024-04-19 04:11:48.023225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.571 [2024-04-19 04:11:48.023233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.571 [2024-04-19 04:11:48.023237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023242] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367a00) on tqpair=0x12ffcb0 00:20:33.571 [2024-04-19 04:11:48.023252] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023261] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:33.571 [2024-04-19 04:11:48.023279] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023289] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.571 [2024-04-19 04:11:48.023303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023308] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023313] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.571 [2024-04-19 04:11:48.023328] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023332] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023337] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.571 [2024-04-19 04:11:48.023357] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.023371] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:33.571 [2024-04-19 04:11:48.023380] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.571 [2024-04-19 04:11:48.023409] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367a00, cid 0, qid 0 00:20:33.571 [2024-04-19 04:11:48.023415] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367b60, cid 1, qid 0 00:20:33.571 [2024-04-19 04:11:48.023421] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367cc0, cid 2, qid 0 00:20:33.571 [2024-04-19 04:11:48.023427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.571 [2024-04-19 04:11:48.023433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367f80, cid 4, qid 0 00:20:33.571 [2024-04-19 04:11:48.023580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.571 [2024-04-19 04:11:48.023589] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.571 [2024-04-19 04:11:48.023593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023598] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367f80) on tqpair=0x12ffcb0 
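
All of the *DEBUG* records in this stream come from the -L all option on the spdk_nvme_identify invocation above; they trace the fabrics bring-up step by step (icreq, VS/CAP property reads, CC.EN toggle, CSTS.RDY polling, IDENTIFY, AER configuration, keep-alive). A re-run by hand looks like the following (the single-flag form of -L is an assumption carried over from other SPDK tools; this run used -L all):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L nvme    # hypothetical narrower flag; '-L all' enables every trace group, as above
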
00:20:33.571 [2024-04-19 04:11:48.023605] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:33.571 [2024-04-19 04:11:48.023611] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:33.571 [2024-04-19 04:11:48.023625] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023631] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.571 [2024-04-19 04:11:48.023652] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367f80, cid 4, qid 0 00:20:33.571 [2024-04-19 04:11:48.023764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.571 [2024-04-19 04:11:48.023776] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.571 [2024-04-19 04:11:48.023781] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023785] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ffcb0): datao=0, datal=4096, cccid=4 00:20:33.571 [2024-04-19 04:11:48.023791] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1367f80) on tqpair(0x12ffcb0): expected_datao=0, payload_size=4096 00:20:33.571 [2024-04-19 04:11:48.023796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023805] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023810] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.571 [2024-04-19 04:11:48.023849] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.571 [2024-04-19 04:11:48.023854] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023858] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367f80) on tqpair=0x12ffcb0 00:20:33.571 [2024-04-19 04:11:48.023874] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:33.571 [2024-04-19 04:11:48.023894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.571 [2024-04-19 04:11:48.023917] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023922] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.023926] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12ffcb0) 00:20:33.571 [2024-04-19 04:11:48.023933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.571 [2024-04-19 04:11:48.023953] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367f80, cid 4, qid 0 00:20:33.571 [2024-04-19 04:11:48.023960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13680e0, cid 5, qid 0 00:20:33.571 [2024-04-19 04:11:48.024098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.571 [2024-04-19 04:11:48.024106] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.571 [2024-04-19 04:11:48.024111] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.024116] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ffcb0): datao=0, datal=1024, cccid=4 00:20:33.571 [2024-04-19 04:11:48.024121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1367f80) on tqpair(0x12ffcb0): expected_datao=0, payload_size=1024 00:20:33.571 [2024-04-19 04:11:48.024126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.571 [2024-04-19 04:11:48.024135] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.024139] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.024146] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.572 [2024-04-19 04:11:48.024153] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.572 [2024-04-19 04:11:48.024158] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.024163] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13680e0) on tqpair=0x12ffcb0 00:20:33.572 [2024-04-19 04:11:48.064520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.572 [2024-04-19 04:11:48.064540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.572 [2024-04-19 04:11:48.064545] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.064550] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367f80) on tqpair=0x12ffcb0 00:20:33.572 [2024-04-19 04:11:48.064571] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.064576] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ffcb0) 00:20:33.572 [2024-04-19 04:11:48.064587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.572 [2024-04-19 04:11:48.064609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367f80, cid 4, qid 0 00:20:33.572 [2024-04-19 04:11:48.064771] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.572 [2024-04-19 04:11:48.064779] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.572 [2024-04-19 04:11:48.064784] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.064788] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ffcb0): datao=0, datal=3072, cccid=4 00:20:33.572 [2024-04-19 04:11:48.064794] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1367f80) on tqpair(0x12ffcb0): expected_datao=0, payload_size=3072 00:20:33.572 [2024-04-19 04:11:48.064800] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.572 [2024-04-19 04:11:48.064808] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
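
The GET LOG PAGE (02) exchanges above are the discovery-log reads whose decoded result is printed below. common.sh already defines NVME_CONNECT='nvme connect' and uses nvme gen-hostnqn, so roughly the same query can be made with the kernel initiator from the root namespace (a sketch, assuming nvme-cli is installed and the nvme-tcp module is loaded):

  nvme discover -t tcp -a 10.0.0.2 -s 4420          # same discovery log, decoded by nvme-cli
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$(nvme gen-hostnqn)"
  nvme list                                          # the Malloc0 namespace appears as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
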
00:20:33.572 [2024-04-19 04:11:48.064813] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.064851] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:33.572 [2024-04-19 04:11:48.064859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:33.572 [2024-04-19 04:11:48.064864] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.064868] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367f80) on tqpair=0x12ffcb0
00:20:33.572 [2024-04-19 04:11:48.064879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.064885] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12ffcb0)
00:20:33.572 [2024-04-19 04:11:48.064893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:33.572 [2024-04-19 04:11:48.064911] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367f80, cid 4, qid 0
00:20:33.572 [2024-04-19 04:11:48.065019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:33.572 [2024-04-19 04:11:48.065028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:33.572 [2024-04-19 04:11:48.065032] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.065037] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12ffcb0): datao=0, datal=8, cccid=4
00:20:33.572 [2024-04-19 04:11:48.065042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1367f80) on tqpair(0x12ffcb0): expected_datao=0, payload_size=8
00:20:33.572 [2024-04-19 04:11:48.065048] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.065056] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:33.572 [2024-04-19 04:11:48.065060] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:33.834 [2024-04-19 04:11:48.109528] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:33.834 [2024-04-19 04:11:48.109544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:33.834 [2024-04-19 04:11:48.109549] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:33.834 [2024-04-19 04:11:48.109554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367f80) on tqpair=0x12ffcb0
00:20:33.834 =====================================================
00:20:33.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:33.834 =====================================================
00:20:33.834 Controller Capabilities/Features
00:20:33.834 ================================
00:20:33.834 Vendor ID: 0000
00:20:33.834 Subsystem Vendor ID: 0000
00:20:33.834 Serial Number: ....................
00:20:33.834 Model Number: ........................................
00:20:33.834 Firmware Version: 24.05
00:20:33.834 Recommended Arb Burst: 0
00:20:33.834 IEEE OUI Identifier: 00 00 00
00:20:33.834 Multi-path I/O
00:20:33.834 May have multiple subsystem ports: No
00:20:33.834 May have multiple controllers: No
00:20:33.834 Associated with SR-IOV VF: No
00:20:33.834 Max Data Transfer Size: 131072
00:20:33.834 Max Number of Namespaces: 0
00:20:33.834 Max Number of I/O Queues: 1024
00:20:33.834 NVMe Specification Version (VS): 1.3
00:20:33.834 NVMe Specification Version (Identify): 1.3
00:20:33.834 Maximum Queue Entries: 128
00:20:33.834 Contiguous Queues Required: Yes
00:20:33.834 Arbitration Mechanisms Supported
00:20:33.834 Weighted Round Robin: Not Supported
00:20:33.834 Vendor Specific: Not Supported
00:20:33.834 Reset Timeout: 15000 ms
00:20:33.834 Doorbell Stride: 4 bytes
00:20:33.834 NVM Subsystem Reset: Not Supported
00:20:33.834 Command Sets Supported
00:20:33.834 NVM Command Set: Supported
00:20:33.834 Boot Partition: Not Supported
00:20:33.834 Memory Page Size Minimum: 4096 bytes
00:20:33.834 Memory Page Size Maximum: 4096 bytes
00:20:33.834 Persistent Memory Region: Not Supported
00:20:33.834 Optional Asynchronous Events Supported
00:20:33.834 Namespace Attribute Notices: Not Supported
00:20:33.834 Firmware Activation Notices: Not Supported
00:20:33.834 ANA Change Notices: Not Supported
00:20:33.834 PLE Aggregate Log Change Notices: Not Supported
00:20:33.834 LBA Status Info Alert Notices: Not Supported
00:20:33.834 EGE Aggregate Log Change Notices: Not Supported
00:20:33.834 Normal NVM Subsystem Shutdown event: Not Supported
00:20:33.834 Zone Descriptor Change Notices: Not Supported
00:20:33.834 Discovery Log Change Notices: Supported
00:20:33.834 Controller Attributes
00:20:33.834 128-bit Host Identifier: Not Supported
00:20:33.834 Non-Operational Permissive Mode: Not Supported
00:20:33.834 NVM Sets: Not Supported
00:20:33.834 Read Recovery Levels: Not Supported
00:20:33.834 Endurance Groups: Not Supported
00:20:33.834 Predictable Latency Mode: Not Supported
00:20:33.834 Traffic Based Keep ALive: Not Supported
00:20:33.834 Namespace Granularity: Not Supported
00:20:33.834 SQ Associations: Not Supported
00:20:33.834 UUID List: Not Supported
00:20:33.834 Multi-Domain Subsystem: Not Supported
00:20:33.834 Fixed Capacity Management: Not Supported
00:20:33.834 Variable Capacity Management: Not Supported
00:20:33.834 Delete Endurance Group: Not Supported
00:20:33.834 Delete NVM Set: Not Supported
00:20:33.834 Extended LBA Formats Supported: Not Supported
00:20:33.834 Flexible Data Placement Supported: Not Supported
00:20:33.834
00:20:33.834 Controller Memory Buffer Support
00:20:33.834 ================================
00:20:33.835 Supported: No
00:20:33.835
00:20:33.835 Persistent Memory Region Support
00:20:33.835 ================================
00:20:33.835 Supported: No
00:20:33.835
00:20:33.835 Admin Command Set Attributes
00:20:33.835 ============================
00:20:33.835 Security Send/Receive: Not Supported
00:20:33.835 Format NVM: Not Supported
00:20:33.835 Firmware Activate/Download: Not Supported
00:20:33.835 Namespace Management: Not Supported
00:20:33.835 Device Self-Test: Not Supported
00:20:33.835 Directives: Not Supported
00:20:33.835 NVMe-MI: Not Supported
00:20:33.835 Virtualization Management: Not Supported
00:20:33.835 Doorbell Buffer Config: Not Supported
00:20:33.835 Get LBA Status Capability: Not Supported
00:20:33.835 Command & Feature Lockdown Capability: Not Supported
00:20:33.835 Abort Command Limit: 1
00:20:33.835 Async Event Request Limit: 4
00:20:33.835 Number of Firmware Slots: N/A
00:20:33.835 Firmware Slot 1 Read-Only: N/A
00:20:33.835 Firmware Activation Without Reset: N/A
00:20:33.835 Multiple Update Detection Support: N/A
00:20:33.835 Firmware Update Granularity: No Information Provided
00:20:33.835 Per-Namespace SMART Log: No
00:20:33.835 Asymmetric Namespace Access Log Page: Not Supported
00:20:33.835 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:33.835 Command Effects Log Page: Not Supported
00:20:33.835 Get Log Page Extended Data: Supported
00:20:33.835 Telemetry Log Pages: Not Supported
00:20:33.835 Persistent Event Log Pages: Not Supported
00:20:33.835 Supported Log Pages Log Page: May Support
00:20:33.835 Commands Supported & Effects Log Page: Not Supported
00:20:33.835 Feature Identifiers & Effects Log Page:May Support
00:20:33.835 NVMe-MI Commands & Effects Log Page: May Support
00:20:33.835 Data Area 4 for Telemetry Log: Not Supported
00:20:33.835 Error Log Page Entries Supported: 128
00:20:33.835 Keep Alive: Not Supported
00:20:33.835
00:20:33.835 NVM Command Set Attributes
00:20:33.835 ==========================
00:20:33.835 Submission Queue Entry Size
00:20:33.835 Max: 1
00:20:33.835 Min: 1
00:20:33.835 Completion Queue Entry Size
00:20:33.835 Max: 1
00:20:33.835 Min: 1
00:20:33.835 Number of Namespaces: 0
00:20:33.835 Compare Command: Not Supported
00:20:33.835 Write Uncorrectable Command: Not Supported
00:20:33.835 Dataset Management Command: Not Supported
00:20:33.835 Write Zeroes Command: Not Supported
00:20:33.835 Set Features Save Field: Not Supported
00:20:33.835 Reservations: Not Supported
00:20:33.835 Timestamp: Not Supported
00:20:33.835 Copy: Not Supported
00:20:33.835 Volatile Write Cache: Not Present
00:20:33.835 Atomic Write Unit (Normal): 1
00:20:33.835 Atomic Write Unit (PFail): 1
00:20:33.835 Atomic Compare & Write Unit: 1
00:20:33.835 Fused Compare & Write: Supported
00:20:33.835 Scatter-Gather List
00:20:33.835 SGL Command Set: Supported
00:20:33.835 SGL Keyed: Supported
00:20:33.835 SGL Bit Bucket Descriptor: Not Supported
00:20:33.835 SGL Metadata Pointer: Not Supported
00:20:33.835 Oversized SGL: Not Supported
00:20:33.835 SGL Metadata Address: Not Supported
00:20:33.835 SGL Offset: Supported
00:20:33.835 Transport SGL Data Block: Not Supported
00:20:33.835 Replay Protected Memory Block: Not Supported
00:20:33.835
00:20:33.835 Firmware Slot Information
00:20:33.835 =========================
00:20:33.835 Active slot: 0
00:20:33.835
00:20:33.835
00:20:33.835 Error Log
00:20:33.835 =========
00:20:33.835
00:20:33.835 Active Namespaces
00:20:33.835 =================
00:20:33.835 Discovery Log Page
00:20:33.835 ==================
00:20:33.835 Generation Counter: 2
00:20:33.835 Number of Records: 2
00:20:33.835 Record Format: 0
00:20:33.835
00:20:33.835 Discovery Log Entry 0
00:20:33.835 ----------------------
00:20:33.835 Transport Type: 3 (TCP)
00:20:33.835 Address Family: 1 (IPv4)
00:20:33.835 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:33.835 Entry Flags:
00:20:33.835 Duplicate Returned Information: 1
00:20:33.835 Explicit Persistent Connection Support for Discovery: 1
00:20:33.835 Transport Requirements:
00:20:33.835 Secure Channel: Not Required
00:20:33.835 Port ID: 0 (0x0000)
00:20:33.835 Controller ID: 65535 (0xffff)
00:20:33.835 Admin Max SQ Size: 128
00:20:33.835 Transport Service Identifier: 4420
00:20:33.835 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:33.835 Transport Address: 10.0.0.2
00:20:33.835 Discovery Log Entry 1
00:20:33.835 ----------------------
00:20:33.835 Transport Type: 3 (TCP)
00:20:33.835 Address Family: 1 (IPv4)
00:20:33.835 Subsystem Type: 2 (NVM Subsystem)
00:20:33.835 Entry Flags:
00:20:33.835 Duplicate Returned Information: 0
00:20:33.835 Explicit Persistent Connection Support for Discovery: 0
00:20:33.835 Transport Requirements:
00:20:33.835 Secure Channel: Not Required
00:20:33.835 Port ID: 0 (0x0000)
00:20:33.835 Controller ID: 65535 (0xffff)
00:20:33.835 Admin Max SQ Size: 128
00:20:33.835 Transport Service Identifier: 4420
00:20:33.835 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:33.835 Transport Address: 10.0.0.2 [2024-04-19 04:11:48.109665] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:33.835 [2024-04-19 04:11:48.109683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.835 [2024-04-19 04:11:48.109691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.835 [2024-04-19 04:11:48.109701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.835 [2024-04-19 04:11:48.109709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.835 [2024-04-19 04:11:48.109720] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:33.835 [2024-04-19 04:11:48.109725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:33.835 [2024-04-19 04:11:48.109730] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0)
00:20:33.835 [2024-04-19 04:11:48.109740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:33.835 [2024-04-19 04:11:48.109759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0
00:20:33.835 [2024-04-19 04:11:48.109864] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:33.835 [2024-04-19 04:11:48.109873] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:33.835 [2024-04-19 04:11:48.109878] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:33.835 [2024-04-19 04:11:48.109882] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0
00:20:33.835 [2024-04-19 04:11:48.109892] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:33.835 [2024-04-19 04:11:48.109897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:33.835 [2024-04-19 04:11:48.109902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0)
00:20:33.835 [2024-04-19 04:11:48.109911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:33.835 [2024-04-19 04:11:48.109929] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0
00:20:33.835 [2024-04-19 04:11:48.110035] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:33.835 [2024-04-19 04:11:48.110044] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:33.835 [2024-04-19 04:11:48.110048]
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.835 [2024-04-19 04:11:48.110060] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:33.835 [2024-04-19 04:11:48.110066] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:33.835 [2024-04-19 04:11:48.110079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110084] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110089] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.835 [2024-04-19 04:11:48.110097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.835 [2024-04-19 04:11:48.110110] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.835 [2024-04-19 04:11:48.110200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.835 [2024-04-19 04:11:48.110209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.835 [2024-04-19 04:11:48.110213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.835 [2024-04-19 04:11:48.110232] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110237] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.835 [2024-04-19 04:11:48.110242] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.110250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.110266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.110374] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.110383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.110388] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110393] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.110406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110416] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.110425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.110439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.110574] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 
04:11:48.110582] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.110586] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110591] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.110604] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110609] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110614] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.110623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.110636] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.110726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.110734] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.110738] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110743] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.110756] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110761] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110766] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.110775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.110787] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.110880] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.110889] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.110893] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110898] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.110911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110916] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.110921] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.110930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.110942] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111041] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:33.836 [2024-04-19 04:11:48.111059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111072] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111077] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111082] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111103] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111211] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111219] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111224] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111228] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111241] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111246] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111273] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111379] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111388] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111393] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111397] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111411] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111416] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111443] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111556] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111568] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111573] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111578] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111736] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111749] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111754] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111773] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111778] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.111892] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.111901] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.111905] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111910] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.111923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111928] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.111932] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.111941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.111953] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.112059] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.112068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.112072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.112076] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.112089] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.112094] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.836 [2024-04-19 
04:11:48.112099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.836 [2024-04-19 04:11:48.112107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.836 [2024-04-19 04:11:48.112120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.836 [2024-04-19 04:11:48.112235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.836 [2024-04-19 04:11:48.112244] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.836 [2024-04-19 04:11:48.112248] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.112253] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.836 [2024-04-19 04:11:48.112266] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.836 [2024-04-19 04:11:48.112271] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112276] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.112285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.112298] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.112394] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.112406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.112411] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.112429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112434] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112439] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.112448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.112462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.112561] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.112570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.112574] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112579] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.112591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112601] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.112610] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.112622] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.112728] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.112736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.112740] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.112758] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112763] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112767] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.112776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.112790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.112882] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.112891] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.112895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112900] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.112913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112918] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.112923] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.112931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.112944] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.113033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.113041] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.113048] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.113066] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113071] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113076] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.113084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.113097] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.113201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.113210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.113214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.113232] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113237] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.113242] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.113250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.113263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.117355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.117368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.117373] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.117378] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.117392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.117397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.117402] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12ffcb0) 00:20:33.837 [2024-04-19 04:11:48.117411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.837 [2024-04-19 04:11:48.117427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1367e20, cid 3, qid 0 00:20:33.837 [2024-04-19 04:11:48.117614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.117623] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.117627] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.117632] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1367e20) on tqpair=0x12ffcb0 00:20:33.837 [2024-04-19 04:11:48.117642] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:33.837 00:20:33.837 04:11:48 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:33.837 [2024-04-19 04:11:48.161156] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
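The host/identify.sh@45 line above is the command that produces the rest of the trace in this section. In sketch form (the binary path is shortened from the full workspace path shown above; the address, service ID, and subsystem NQN are the ones advertised in the discovery log entries earlier; the nvme-cli lines are an illustrative equivalent for a stock Linux host and are not part of this job):

  # Identify the data subsystem over NVMe/TCP with SPDK's identify tool,
  # enabling every SPDK debug log flag (-L all) to get the *DEBUG* trace below.
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all

  # For comparison only (assumes nvme-cli and the kernel nvme-tcp module;
  # not run by this job): the same target reached from a generic host.
  # nvme discover -t tcp -a 10.0.0.2 -s 4420
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1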
00:20:33.837 [2024-04-19 04:11:48.161201] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891395 ] 00:20:33.837 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.837 [2024-04-19 04:11:48.198569] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:33.837 [2024-04-19 04:11:48.198619] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:33.837 [2024-04-19 04:11:48.198626] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:33.837 [2024-04-19 04:11:48.198639] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:33.837 [2024-04-19 04:11:48.198647] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:33.837 [2024-04-19 04:11:48.198849] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:33.837 [2024-04-19 04:11:48.198880] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x230fcb0 0 00:20:33.837 [2024-04-19 04:11:48.211356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:33.837 [2024-04-19 04:11:48.211373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:33.837 [2024-04-19 04:11:48.211379] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:33.837 [2024-04-19 04:11:48.211383] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:33.837 [2024-04-19 04:11:48.211423] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.211430] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.211436] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.837 [2024-04-19 04:11:48.211449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:33.837 [2024-04-19 04:11:48.211470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.837 [2024-04-19 04:11:48.220355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.837 [2024-04-19 04:11:48.220366] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.837 [2024-04-19 04:11:48.220371] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.220376] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.837 [2024-04-19 04:11:48.220387] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:33.837 [2024-04-19 04:11:48.220395] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:33.837 [2024-04-19 04:11:48.220402] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:33.837 [2024-04-19 04:11:48.220418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.837 [2024-04-19 04:11:48.220423] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.837 [2024-04-19 
04:11:48.220427] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.220437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.220455] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.220634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.220643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.220647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.220659] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:33.838 [2024-04-19 04:11:48.220669] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:33.838 [2024-04-19 04:11:48.220681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220687] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.220699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.220714] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.220794] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.220803] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.220807] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220812] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.220819] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:33.838 [2024-04-19 04:11:48.220830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.220839] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220843] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220848] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.220856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.220870] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.220953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.220961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:33.838 [2024-04-19 04:11:48.220966] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.220978] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.220990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.220995] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.221008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.221022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.221101] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.221109] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.221114] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221119] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.221125] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:33.838 [2024-04-19 04:11:48.221131] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.221141] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.221250] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:33.838 [2024-04-19 04:11:48.221256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.221265] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221270] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221275] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.221283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.221298] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.221469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.221478] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.221483] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on 
tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.221495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:33.838 [2024-04-19 04:11:48.221507] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221512] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.221526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.221539] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.221653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.221661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.221666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.221677] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:33.838 [2024-04-19 04:11:48.221683] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:33.838 [2024-04-19 04:11:48.221693] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:33.838 [2024-04-19 04:11:48.221704] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:33.838 [2024-04-19 04:11:48.221717] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221722] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.221730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.838 [2024-04-19 04:11:48.221745] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.221882] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.838 [2024-04-19 04:11:48.221890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.838 [2024-04-19 04:11:48.221895] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221899] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=4096, cccid=0 00:20:33.838 [2024-04-19 04:11:48.221908] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2377a00) on tqpair(0x230fcb0): expected_datao=0, payload_size=4096 00:20:33.838 [2024-04-19 04:11:48.221913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221928] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.221933] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.263353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.263368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.263372] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.263377] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.263388] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:33.838 [2024-04-19 04:11:48.263394] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:33.838 [2024-04-19 04:11:48.263400] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:33.838 [2024-04-19 04:11:48.263405] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:33.838 [2024-04-19 04:11:48.263411] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:33.838 [2024-04-19 04:11:48.263417] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:33.838 [2024-04-19 04:11:48.263429] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:33.838 [2024-04-19 04:11:48.263438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.263444] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.263448] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.838 [2024-04-19 04:11:48.263457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.838 [2024-04-19 04:11:48.263474] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.838 [2024-04-19 04:11:48.263644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.838 [2024-04-19 04:11:48.263652] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.838 [2024-04-19 04:11:48.263657] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.838 [2024-04-19 04:11:48.263661] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377a00) on tqpair=0x230fcb0 00:20:33.838 [2024-04-19 04:11:48.263671] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263676] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263681] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.263689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.839 [2024-04-19 04:11:48.263697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263701] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263705] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.263713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.839 [2024-04-19 04:11:48.263721] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263730] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.263740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.839 [2024-04-19 04:11:48.263748] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263753] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.263765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.839 [2024-04-19 04:11:48.263771] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.263786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.263794] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.263808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.839 [2024-04-19 04:11:48.263823] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377a00, cid 0, qid 0 00:20:33.839 [2024-04-19 04:11:48.263830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377b60, cid 1, qid 0 00:20:33.839 [2024-04-19 04:11:48.263836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377cc0, cid 2, qid 0 00:20:33.839 [2024-04-19 04:11:48.263842] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.839 [2024-04-19 04:11:48.263848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.839 [2024-04-19 04:11:48.263950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.839 [2024-04-19 04:11:48.263959] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.839 [2024-04-19 04:11:48.263963] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.263969] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.839 [2024-04-19 04:11:48.263976] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:33.839 [2024-04-19 04:11:48.263982] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.263995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264011] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.264029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.839 [2024-04-19 04:11:48.264043] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.839 [2024-04-19 04:11:48.264127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.839 [2024-04-19 04:11:48.264135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.839 [2024-04-19 04:11:48.264140] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.839 [2024-04-19 04:11:48.264208] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264230] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264235] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.264244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.839 [2024-04-19 04:11:48.264258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.839 [2024-04-19 04:11:48.264356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.839 [2024-04-19 04:11:48.264366] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.839 [2024-04-19 04:11:48.264370] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264375] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=4096, cccid=4 00:20:33.839 [2024-04-19 04:11:48.264381] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2377f80) on tqpair(0x230fcb0): expected_datao=0, payload_size=4096 00:20:33.839 [2024-04-19 04:11:48.264386] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264395] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264400] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264458] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.839 [2024-04-19 04:11:48.264466] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.839 [2024-04-19 04:11:48.264471] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.839 [2024-04-19 04:11:48.264486] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:33.839 [2024-04-19 04:11:48.264502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264514] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:33.839 [2024-04-19 04:11:48.264523] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264528] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.839 [2024-04-19 04:11:48.264537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.839 [2024-04-19 04:11:48.264552] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.839 [2024-04-19 04:11:48.264653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.839 [2024-04-19 04:11:48.264662] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.839 [2024-04-19 04:11:48.264667] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264671] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=4096, cccid=4 00:20:33.839 [2024-04-19 04:11:48.264677] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2377f80) on tqpair(0x230fcb0): expected_datao=0, payload_size=4096 00:20:33.839 [2024-04-19 04:11:48.264682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264691] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264696] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264745] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.839 [2024-04-19 04:11:48.264756] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.839 [2024-04-19 04:11:48.264761] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.839 [2024-04-19 04:11:48.264765] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.264780] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.264792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.264802] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.264807] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.264815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.264830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.840 [2024-04-19 04:11:48.264922] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.840 [2024-04-19 04:11:48.264930] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.840 [2024-04-19 04:11:48.264935] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.264939] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=4096, cccid=4 00:20:33.840 [2024-04-19 04:11:48.264945] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2377f80) on tqpair(0x230fcb0): expected_datao=0, payload_size=4096 00:20:33.840 [2024-04-19 04:11:48.264950] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.264959] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.264963] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.264996] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265004] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265008] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265013] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265022] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265033] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265046] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265054] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265066] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:33.840 [2024-04-19 04:11:48.265072] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:33.840 [2024-04-19 04:11:48.265078] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:33.840 [2024-04-19 04:11:48.265094] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265118] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265123] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265127] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.840 [2024-04-19 04:11:48.265153] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.840 [2024-04-19 04:11:48.265160] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23780e0, cid 5, qid 0 00:20:33.840 [2024-04-19 04:11:48.265263] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265271] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265280] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265289] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265297] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265301] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265306] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23780e0) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23780e0, cid 5, qid 0 00:20:33.840 [2024-04-19 04:11:48.265429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265438] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265442] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265447] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23780e0) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265459] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265486] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23780e0, cid 5, qid 0 00:20:33.840 [2024-04-19 04:11:48.265562] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265570] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265575] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265579] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23780e0) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265597] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23780e0, cid 5, qid 0 00:20:33.840 [2024-04-19 04:11:48.265721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.840 [2024-04-19 04:11:48.265731] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.840 [2024-04-19 04:11:48.265736] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23780e0) on tqpair=0x230fcb0 00:20:33.840 [2024-04-19 04:11:48.265756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265784] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265806] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265823] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.265828] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x230fcb0) 00:20:33.840 [2024-04-19 04:11:48.265836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.840 [2024-04-19 04:11:48.265852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23780e0, cid 5, qid 0 00:20:33.840 [2024-04-19 04:11:48.265859] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377f80, cid 4, qid 0 00:20:33.840 [2024-04-19 04:11:48.265865] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x2378240, cid 6, qid 0 00:20:33.840 [2024-04-19 04:11:48.265871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23783a0, cid 7, qid 0 00:20:33.840 [2024-04-19 04:11:48.266042] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.840 [2024-04-19 04:11:48.266050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.840 [2024-04-19 04:11:48.266055] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266059] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=8192, cccid=5 00:20:33.840 [2024-04-19 04:11:48.266065] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23780e0) on tqpair(0x230fcb0): expected_datao=0, payload_size=8192 00:20:33.840 [2024-04-19 04:11:48.266070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266130] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266136] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266143] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.840 [2024-04-19 04:11:48.266150] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.840 [2024-04-19 04:11:48.266154] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266159] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=512, cccid=4 00:20:33.840 [2024-04-19 04:11:48.266164] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2377f80) on tqpair(0x230fcb0): expected_datao=0, payload_size=512 00:20:33.840 [2024-04-19 04:11:48.266170] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266180] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.840 [2024-04-19 04:11:48.266185] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266192] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.841 [2024-04-19 04:11:48.266199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.841 [2024-04-19 04:11:48.266203] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266208] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=512, cccid=6 00:20:33.841 [2024-04-19 04:11:48.266213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2378240) on tqpair(0x230fcb0): expected_datao=0, payload_size=512 00:20:33.841 [2024-04-19 04:11:48.266219] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266227] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266231] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266238] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.841 [2024-04-19 04:11:48.266245] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.841 [2024-04-19 04:11:48.266250] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266254] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x230fcb0): datao=0, datal=4096, cccid=7 
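A note for decoding the SPDK traces above and below: the "pdu type" values follow the NVMe/TCP specification, where type 4 is CapsuleCmd, 5 is CapsuleResp, 6 is H2CData, and 7 is C2HData. In the c2h_data lines, datao/datal are the data offset and length carried by the PDU, and cccid echoes the command identifier the payload belongs to, which is how each response is matched back to its tcp_req. A minimal shell helper for tallying the PDU mix in a saved copy of a log like this one might look as follows (the filename is illustrative, not produced by this run):

  # Count NVMe/TCP PDU types seen in a captured trace; the pattern tolerates
  # the source's inconsistent spacing ("pdu type = 5" vs "pdu type =5").
  grep -o 'pdu type = *[0-9]*' nvmf-identify.log | tr -d ' ' | sort | uniq -c
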
00:20:33.841 [2024-04-19 04:11:48.266260] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23783a0) on tqpair(0x230fcb0): expected_datao=0, payload_size=4096 00:20:33.841 [2024-04-19 04:11:48.266266] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266274] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266278] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266288] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.841 [2024-04-19 04:11:48.266296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.841 [2024-04-19 04:11:48.266300] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23780e0) on tqpair=0x230fcb0 00:20:33.841 [2024-04-19 04:11:48.266321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.841 [2024-04-19 04:11:48.266328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.841 [2024-04-19 04:11:48.266333] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266338] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377f80) on tqpair=0x230fcb0 00:20:33.841 [2024-04-19 04:11:48.266370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.841 [2024-04-19 04:11:48.266378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.841 [2024-04-19 04:11:48.266383] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266387] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2378240) on tqpair=0x230fcb0 00:20:33.841 [2024-04-19 04:11:48.266397] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.841 [2024-04-19 04:11:48.266405] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.841 [2024-04-19 04:11:48.266409] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.841 [2024-04-19 04:11:48.266414] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23783a0) on tqpair=0x230fcb0 00:20:33.841 ===================================================== 00:20:33.841 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.841 ===================================================== 00:20:33.841 Controller Capabilities/Features 00:20:33.841 ================================ 00:20:33.841 Vendor ID: 8086 00:20:33.841 Subsystem Vendor ID: 8086 00:20:33.841 Serial Number: SPDK00000000000001 00:20:33.841 Model Number: SPDK bdev Controller 00:20:33.841 Firmware Version: 24.05 00:20:33.841 Recommended Arb Burst: 6 00:20:33.841 IEEE OUI Identifier: e4 d2 5c 00:20:33.841 Multi-path I/O 00:20:33.841 May have multiple subsystem ports: Yes 00:20:33.841 May have multiple controllers: Yes 00:20:33.841 Associated with SR-IOV VF: No 00:20:33.841 Max Data Transfer Size: 131072 00:20:33.841 Max Number of Namespaces: 32 00:20:33.841 Max Number of I/O Queues: 127 00:20:33.841 NVMe Specification Version (VS): 1.3 00:20:33.841 NVMe Specification Version (Identify): 1.3 00:20:33.841 Maximum Queue Entries: 128 00:20:33.841 Contiguous Queues Required: Yes 00:20:33.841 Arbitration Mechanisms Supported 00:20:33.841 Weighted Round Robin: Not Supported 00:20:33.841 Vendor 
Specific: Not Supported 00:20:33.841 Reset Timeout: 15000 ms 00:20:33.841 Doorbell Stride: 4 bytes 00:20:33.841 NVM Subsystem Reset: Not Supported 00:20:33.841 Command Sets Supported 00:20:33.841 NVM Command Set: Supported 00:20:33.841 Boot Partition: Not Supported 00:20:33.841 Memory Page Size Minimum: 4096 bytes 00:20:33.841 Memory Page Size Maximum: 4096 bytes 00:20:33.841 Persistent Memory Region: Not Supported 00:20:33.841 Optional Asynchronous Events Supported 00:20:33.841 Namespace Attribute Notices: Supported 00:20:33.841 Firmware Activation Notices: Not Supported 00:20:33.841 ANA Change Notices: Not Supported 00:20:33.841 PLE Aggregate Log Change Notices: Not Supported 00:20:33.841 LBA Status Info Alert Notices: Not Supported 00:20:33.841 EGE Aggregate Log Change Notices: Not Supported 00:20:33.841 Normal NVM Subsystem Shutdown event: Not Supported 00:20:33.841 Zone Descriptor Change Notices: Not Supported 00:20:33.841 Discovery Log Change Notices: Not Supported 00:20:33.841 Controller Attributes 00:20:33.841 128-bit Host Identifier: Supported 00:20:33.841 Non-Operational Permissive Mode: Not Supported 00:20:33.841 NVM Sets: Not Supported 00:20:33.841 Read Recovery Levels: Not Supported 00:20:33.841 Endurance Groups: Not Supported 00:20:33.841 Predictable Latency Mode: Not Supported 00:20:33.841 Traffic Based Keep ALive: Not Supported 00:20:33.841 Namespace Granularity: Not Supported 00:20:33.841 SQ Associations: Not Supported 00:20:33.841 UUID List: Not Supported 00:20:33.841 Multi-Domain Subsystem: Not Supported 00:20:33.841 Fixed Capacity Management: Not Supported 00:20:33.841 Variable Capacity Management: Not Supported 00:20:33.841 Delete Endurance Group: Not Supported 00:20:33.841 Delete NVM Set: Not Supported 00:20:33.841 Extended LBA Formats Supported: Not Supported 00:20:33.841 Flexible Data Placement Supported: Not Supported 00:20:33.841 00:20:33.841 Controller Memory Buffer Support 00:20:33.841 ================================ 00:20:33.841 Supported: No 00:20:33.841 00:20:33.841 Persistent Memory Region Support 00:20:33.841 ================================ 00:20:33.841 Supported: No 00:20:33.841 00:20:33.841 Admin Command Set Attributes 00:20:33.841 ============================ 00:20:33.841 Security Send/Receive: Not Supported 00:20:33.841 Format NVM: Not Supported 00:20:33.841 Firmware Activate/Download: Not Supported 00:20:33.841 Namespace Management: Not Supported 00:20:33.841 Device Self-Test: Not Supported 00:20:33.841 Directives: Not Supported 00:20:33.841 NVMe-MI: Not Supported 00:20:33.841 Virtualization Management: Not Supported 00:20:33.841 Doorbell Buffer Config: Not Supported 00:20:33.841 Get LBA Status Capability: Not Supported 00:20:33.841 Command & Feature Lockdown Capability: Not Supported 00:20:33.841 Abort Command Limit: 4 00:20:33.841 Async Event Request Limit: 4 00:20:33.841 Number of Firmware Slots: N/A 00:20:33.841 Firmware Slot 1 Read-Only: N/A 00:20:33.841 Firmware Activation Without Reset: N/A 00:20:33.841 Multiple Update Detection Support: N/A 00:20:33.841 Firmware Update Granularity: No Information Provided 00:20:33.841 Per-Namespace SMART Log: No 00:20:33.841 Asymmetric Namespace Access Log Page: Not Supported 00:20:33.841 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:33.841 Command Effects Log Page: Supported 00:20:33.841 Get Log Page Extended Data: Supported 00:20:33.841 Telemetry Log Pages: Not Supported 00:20:33.841 Persistent Event Log Pages: Not Supported 00:20:33.841 Supported Log Pages Log Page: May Support 00:20:33.841 Commands 
Supported & Effects Log Page: Not Supported 00:20:33.841 Feature Identifiers & Effects Log Page:May Support 00:20:33.841 NVMe-MI Commands & Effects Log Page: May Support 00:20:33.841 Data Area 4 for Telemetry Log: Not Supported 00:20:33.841 Error Log Page Entries Supported: 128 00:20:33.841 Keep Alive: Supported 00:20:33.841 Keep Alive Granularity: 10000 ms 00:20:33.841 00:20:33.841 NVM Command Set Attributes 00:20:33.841 ========================== 00:20:33.841 Submission Queue Entry Size 00:20:33.841 Max: 64 00:20:33.841 Min: 64 00:20:33.841 Completion Queue Entry Size 00:20:33.841 Max: 16 00:20:33.841 Min: 16 00:20:33.841 Number of Namespaces: 32 00:20:33.841 Compare Command: Supported 00:20:33.841 Write Uncorrectable Command: Not Supported 00:20:33.841 Dataset Management Command: Supported 00:20:33.841 Write Zeroes Command: Supported 00:20:33.841 Set Features Save Field: Not Supported 00:20:33.841 Reservations: Supported 00:20:33.841 Timestamp: Not Supported 00:20:33.841 Copy: Supported 00:20:33.841 Volatile Write Cache: Present 00:20:33.841 Atomic Write Unit (Normal): 1 00:20:33.841 Atomic Write Unit (PFail): 1 00:20:33.841 Atomic Compare & Write Unit: 1 00:20:33.841 Fused Compare & Write: Supported 00:20:33.841 Scatter-Gather List 00:20:33.841 SGL Command Set: Supported 00:20:33.841 SGL Keyed: Supported 00:20:33.841 SGL Bit Bucket Descriptor: Not Supported 00:20:33.841 SGL Metadata Pointer: Not Supported 00:20:33.841 Oversized SGL: Not Supported 00:20:33.841 SGL Metadata Address: Not Supported 00:20:33.841 SGL Offset: Supported 00:20:33.841 Transport SGL Data Block: Not Supported 00:20:33.841 Replay Protected Memory Block: Not Supported 00:20:33.841 00:20:33.841 Firmware Slot Information 00:20:33.842 ========================= 00:20:33.842 Active slot: 1 00:20:33.842 Slot 1 Firmware Revision: 24.05 00:20:33.842 00:20:33.842 00:20:33.842 Commands Supported and Effects 00:20:33.842 ============================== 00:20:33.842 Admin Commands 00:20:33.842 -------------- 00:20:33.842 Get Log Page (02h): Supported 00:20:33.842 Identify (06h): Supported 00:20:33.842 Abort (08h): Supported 00:20:33.842 Set Features (09h): Supported 00:20:33.842 Get Features (0Ah): Supported 00:20:33.842 Asynchronous Event Request (0Ch): Supported 00:20:33.842 Keep Alive (18h): Supported 00:20:33.842 I/O Commands 00:20:33.842 ------------ 00:20:33.842 Flush (00h): Supported LBA-Change 00:20:33.842 Write (01h): Supported LBA-Change 00:20:33.842 Read (02h): Supported 00:20:33.842 Compare (05h): Supported 00:20:33.842 Write Zeroes (08h): Supported LBA-Change 00:20:33.842 Dataset Management (09h): Supported LBA-Change 00:20:33.842 Copy (19h): Supported LBA-Change 00:20:33.842 Unknown (79h): Supported LBA-Change 00:20:33.842 Unknown (7Ah): Supported 00:20:33.842 00:20:33.842 Error Log 00:20:33.842 ========= 00:20:33.842 00:20:33.842 Arbitration 00:20:33.842 =========== 00:20:33.842 Arbitration Burst: 1 00:20:33.842 00:20:33.842 Power Management 00:20:33.842 ================ 00:20:33.842 Number of Power States: 1 00:20:33.842 Current Power State: Power State #0 00:20:33.842 Power State #0: 00:20:33.842 Max Power: 0.00 W 00:20:33.842 Non-Operational State: Operational 00:20:33.842 Entry Latency: Not Reported 00:20:33.842 Exit Latency: Not Reported 00:20:33.842 Relative Read Throughput: 0 00:20:33.842 Relative Read Latency: 0 00:20:33.842 Relative Write Throughput: 0 00:20:33.842 Relative Write Latency: 0 00:20:33.842 Idle Power: Not Reported 00:20:33.842 Active Power: Not Reported 00:20:33.842 Non-Operational 
Permissive Mode: Not Supported 00:20:33.842 00:20:33.842 Health Information 00:20:33.842 ================== 00:20:33.842 Critical Warnings: 00:20:33.842 Available Spare Space: OK 00:20:33.842 Temperature: OK 00:20:33.842 Device Reliability: OK 00:20:33.842 Read Only: No 00:20:33.842 Volatile Memory Backup: OK 00:20:33.842 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:33.842 Temperature Threshold: [2024-04-19 04:11:48.266536] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266543] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.266552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.266568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23783a0, cid 7, qid 0 00:20:33.842 [2024-04-19 04:11:48.266653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.266663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.266668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23783a0) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.266706] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:33.842 [2024-04-19 04:11:48.266720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.842 [2024-04-19 04:11:48.266729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.842 [2024-04-19 04:11:48.266736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.842 [2024-04-19 04:11:48.266744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.842 [2024-04-19 04:11:48.266754] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266763] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.266772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.266788] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.842 [2024-04-19 04:11:48.266867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.266876] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.266880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266885] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377e20) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.266895] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266899] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.266904] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.266912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.266930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.842 [2024-04-19 04:11:48.267016] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.267024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.267028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377e20) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.267040] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:33.842 [2024-04-19 04:11:48.267046] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:33.842 [2024-04-19 04:11:48.267058] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.267076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.267090] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.842 [2024-04-19 04:11:48.267167] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.267178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.267183] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267188] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377e20) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.267200] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267205] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267210] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.267219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.267232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.842 [2024-04-19 04:11:48.267308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.267317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.267321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.267326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377e20) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.267339] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.271362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.271369] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x230fcb0) 00:20:33.842 [2024-04-19 04:11:48.271378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.842 [2024-04-19 04:11:48.271394] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2377e20, cid 3, qid 0 00:20:33.842 [2024-04-19 04:11:48.271583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.842 [2024-04-19 04:11:48.271592] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.842 [2024-04-19 04:11:48.271596] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.842 [2024-04-19 04:11:48.271601] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2377e20) on tqpair=0x230fcb0 00:20:33.842 [2024-04-19 04:11:48.271611] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:33.842 0 Kelvin (-273 Celsius) 00:20:33.842 Available Spare: 0% 00:20:33.842 Available Spare Threshold: 0% 00:20:33.842 Life Percentage Used: 0% 00:20:33.842 Data Units Read: 0 00:20:33.842 Data Units Written: 0 00:20:33.842 Host Read Commands: 0 00:20:33.842 Host Write Commands: 0 00:20:33.842 Controller Busy Time: 0 minutes 00:20:33.842 Power Cycles: 0 00:20:33.842 Power On Hours: 0 hours 00:20:33.842 Unsafe Shutdowns: 0 00:20:33.842 Unrecoverable Media Errors: 0 00:20:33.842 Lifetime Error Log Entries: 0 00:20:33.842 Warning Temperature Time: 0 minutes 00:20:33.842 Critical Temperature Time: 0 minutes 00:20:33.842 00:20:33.842 Number of Queues 00:20:33.842 ================ 00:20:33.842 Number of I/O Submission Queues: 127 00:20:33.842 Number of I/O Completion Queues: 127 00:20:33.842 00:20:33.842 Active Namespaces 00:20:33.842 ================= 00:20:33.842 Namespace ID:1 00:20:33.842 Error Recovery Timeout: Unlimited 00:20:33.842 Command Set Identifier: NVM (00h) 00:20:33.842 Deallocate: Supported 00:20:33.842 Deallocated/Unwritten Error: Not Supported 00:20:33.842 Deallocated Read Value: Unknown 00:20:33.842 Deallocate in Write Zeroes: Not Supported 00:20:33.843 Deallocated Guard Field: 0xFFFF 00:20:33.843 Flush: Supported 00:20:33.843 Reservation: Supported 00:20:33.843 Namespace Sharing Capabilities: Multiple Controllers 00:20:33.843 Size (in LBAs): 131072 (0GiB) 00:20:33.843 Capacity (in LBAs): 131072 (0GiB) 00:20:33.843 Utilization (in LBAs): 131072 (0GiB) 00:20:33.843 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:33.843 EUI64: ABCDEF0123456789 00:20:33.843 UUID: 770221a1-05fa-4c05-b707-fa6b0f961156 00:20:33.843 Thin Provisioning: Not Supported 00:20:33.843 Per-NS Atomic Units: Yes 00:20:33.843 Atomic Boundary Size (Normal): 0 00:20:33.843 Atomic Boundary Size (PFail): 0 00:20:33.843 Atomic Boundary Offset: 0 00:20:33.843 Maximum Single Source Range Length: 65535 00:20:33.843 Maximum Copy Length: 65535 00:20:33.843 Maximum Source Range Count: 1 00:20:33.843 NGUID/EUI64 Never Reused: No 00:20:33.843 Namespace Write Protected: No 00:20:33.843 Number of LBA Formats: 1 00:20:33.843 Current LBA Format: LBA Format #00 00:20:33.843 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:33.843 00:20:33.843 04:11:48 -- host/identify.sh@51 -- # sync 00:20:33.843 
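The dump above is the identify test reading the target controller's data over NVMe/TCP: everything from "Controller Capabilities/Features" through the namespace and LBA-format listing comes from one attach-and-identify pass against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, interleaved with the initiator's debug traces because both share stdout. For cross-checking outside this harness, a stock nvme-cli host could in principle query the same listener; the device node and privileges below are host-dependent assumptions, not something this run executed:

  # Discover the subsystems the target advertises on this address/port.
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Attach, then read the same controller data the test printed above.
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme id-ctrl /dev/nvme0 -H   # assumes the new controller enumerated as nvme0
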
04:11:48 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.843 04:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.843 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:20:33.843 04:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.843 04:11:48 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:33.843 04:11:48 -- host/identify.sh@56 -- # nvmftestfini 00:20:33.843 04:11:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:33.843 04:11:48 -- nvmf/common.sh@117 -- # sync 00:20:33.843 04:11:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.843 04:11:48 -- nvmf/common.sh@120 -- # set +e 00:20:33.843 04:11:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.843 04:11:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.843 rmmod nvme_tcp 00:20:33.843 rmmod nvme_fabrics 00:20:33.843 rmmod nvme_keyring 00:20:34.101 04:11:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.101 04:11:48 -- nvmf/common.sh@124 -- # set -e 00:20:34.101 04:11:48 -- nvmf/common.sh@125 -- # return 0 00:20:34.101 04:11:48 -- nvmf/common.sh@478 -- # '[' -n 3891160 ']' 00:20:34.101 04:11:48 -- nvmf/common.sh@479 -- # killprocess 3891160 00:20:34.101 04:11:48 -- common/autotest_common.sh@936 -- # '[' -z 3891160 ']' 00:20:34.101 04:11:48 -- common/autotest_common.sh@940 -- # kill -0 3891160 00:20:34.101 04:11:48 -- common/autotest_common.sh@941 -- # uname 00:20:34.101 04:11:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.101 04:11:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3891160 00:20:34.101 04:11:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:34.101 04:11:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:34.101 04:11:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3891160' 00:20:34.101 killing process with pid 3891160 00:20:34.101 04:11:48 -- common/autotest_common.sh@955 -- # kill 3891160 00:20:34.101 [2024-04-19 04:11:48.416294] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:34.101 04:11:48 -- common/autotest_common.sh@960 -- # wait 3891160 00:20:34.360 04:11:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:34.360 04:11:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:34.360 04:11:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:34.360 04:11:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.360 04:11:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.360 04:11:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.360 04:11:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.360 04:11:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.265 04:11:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:36.265 00:20:36.265 real 0m8.978s 00:20:36.265 user 0m5.257s 00:20:36.265 sys 0m4.628s 00:20:36.265 04:11:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:36.265 04:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:36.265 ************************************ 00:20:36.265 END TEST nvmf_identify 00:20:36.265 ************************************ 00:20:36.265 04:11:50 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:36.265 
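Before nvmf_perf starts here, nvmftestfini tore the identify setup back down: the host-side modules come out (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), the nvmf_tgt application is killed, and the initiator-side address is flushed, all inside the 0m8.978s the timing summary reports. Condensed, and using the names from this run, the visible teardown amounts to roughly the following (remove_spdk_ns is assumed to also delete the cvl_0_0_ns_spdk namespace; its body is not traced here):

  sudo modprobe -v -r nvme-tcp       # also drags out nvme_fabrics and nvme_keyring
  sudo modprobe -v -r nvme-fabrics   # no-op if already removed above
  kill 3891160                       # the nvmf_tgt started for the identify test
  sudo ip -4 addr flush cvl_0_1      # hand the initiator-side port back unconfigured
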
04:11:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:36.266 04:11:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.266 04:11:50 -- common/autotest_common.sh@10 -- # set +x 00:20:36.524 ************************************ 00:20:36.524 START TEST nvmf_perf 00:20:36.524 ************************************ 00:20:36.524 04:11:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:36.524 * Looking for test storage... 00:20:36.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:36.524 04:11:51 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.524 04:11:51 -- nvmf/common.sh@7 -- # uname -s 00:20:36.524 04:11:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.524 04:11:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.524 04:11:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.524 04:11:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.524 04:11:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.524 04:11:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.524 04:11:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.524 04:11:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.524 04:11:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.524 04:11:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.524 04:11:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:36.524 04:11:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:36.524 04:11:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.524 04:11:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.524 04:11:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.524 04:11:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.524 04:11:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.524 04:11:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.524 04:11:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.524 04:11:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.524 04:11:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.524 04:11:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.524 04:11:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.524 04:11:51 -- paths/export.sh@5 -- # export PATH 00:20:36.524 04:11:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.524 04:11:51 -- nvmf/common.sh@47 -- # : 0 00:20:36.524 04:11:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.524 04:11:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.524 04:11:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.524 04:11:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.524 04:11:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.524 04:11:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.524 04:11:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.524 04:11:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.524 04:11:51 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:36.524 04:11:51 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:36.524 04:11:51 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.524 04:11:51 -- host/perf.sh@17 -- # nvmftestinit 00:20:36.524 04:11:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:36.524 04:11:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.524 04:11:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:36.524 04:11:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:36.524 04:11:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:36.524 04:11:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.524 04:11:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.524 04:11:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.524 04:11:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:36.524 04:11:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:36.524 04:11:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.524 04:11:51 -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.795 04:11:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:41.795 04:11:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.795 04:11:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.795 04:11:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.795 04:11:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.795 04:11:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.795 04:11:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.795 04:11:56 -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.795 04:11:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.795 04:11:56 -- nvmf/common.sh@296 -- # e810=() 00:20:41.795 04:11:56 -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.795 04:11:56 -- nvmf/common.sh@297 -- # x722=() 00:20:41.795 04:11:56 -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.795 04:11:56 -- nvmf/common.sh@298 -- # mlx=() 00:20:41.795 04:11:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.795 04:11:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.795 04:11:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.795 04:11:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.795 04:11:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.795 04:11:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:41.795 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:41.795 04:11:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.795 04:11:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:41.795 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:41.795 04:11:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
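The port scan above (Found 0000:af:00.0 / 0000:af:00.1, the Intel E810 0x159b device bound to the ice driver) is followed by a sysfs walk that maps each PCI function to its kernel netdev, producing the "Found net devices under ..." lines below. One port then becomes the target end (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2) and the other stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), as the ip commands that follow show. The sysfs lookup itself is standard and can be reproduced directly, using the bus address this run found:

  # List the kernel netdevs registered for a given PCI function.
  ls /sys/bus/pci/devices/0000:af:00.0/net
  # expected on this machine: cvl_0_0
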
00:20:41.795 04:11:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.795 04:11:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.795 04:11:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.795 04:11:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:41.795 Found net devices under 0000:af:00.0: cvl_0_0 00:20:41.795 04:11:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.795 04:11:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.795 04:11:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.795 04:11:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.795 04:11:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:41.795 Found net devices under 0000:af:00.1: cvl_0_1 00:20:41.795 04:11:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.795 04:11:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:41.795 04:11:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:41.795 04:11:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:41.795 04:11:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.795 04:11:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.795 04:11:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.795 04:11:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.795 04:11:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.795 04:11:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.795 04:11:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.795 04:11:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.795 04:11:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.795 04:11:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.795 04:11:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.795 04:11:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.795 04:11:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.055 04:11:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.055 04:11:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.055 04:11:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.055 04:11:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.055 04:11:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.055 04:11:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.055 04:11:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:42.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:20:42.055 00:20:42.055 --- 10.0.0.2 ping statistics --- 00:20:42.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.055 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:20:42.055 04:11:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:20:42.055 00:20:42.055 --- 10.0.0.1 ping statistics --- 00:20:42.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.055 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:42.055 04:11:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.055 04:11:56 -- nvmf/common.sh@411 -- # return 0 00:20:42.055 04:11:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:42.055 04:11:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.055 04:11:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:42.055 04:11:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:42.055 04:11:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.055 04:11:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:42.055 04:11:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:42.055 04:11:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:42.055 04:11:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:42.055 04:11:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:42.055 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:42.314 04:11:56 -- nvmf/common.sh@470 -- # nvmfpid=3894983 00:20:42.314 04:11:56 -- nvmf/common.sh@471 -- # waitforlisten 3894983 00:20:42.314 04:11:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:42.314 04:11:56 -- common/autotest_common.sh@817 -- # '[' -z 3894983 ']' 00:20:42.314 04:11:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.314 04:11:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.314 04:11:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.314 04:11:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.314 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:42.314 [2024-04-19 04:11:56.631534] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:20:42.314 [2024-04-19 04:11:56.631592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.314 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.314 [2024-04-19 04:11:56.718313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.314 [2024-04-19 04:11:56.807183] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.314 [2024-04-19 04:11:56.807225] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:42.314 [2024-04-19 04:11:56.807235] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:42.314 [2024-04-19 04:11:56.807244] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:42.314 [2024-04-19 04:11:56.807251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:42.314 [2024-04-19 04:11:56.807298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:42.314 [2024-04-19 04:11:56.807403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:42.314 [2024-04-19 04:11:56.807435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:42.314 [2024-04-19 04:11:56.807437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:43.295 04:11:57 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:43.295 04:11:57 -- common/autotest_common.sh@850 -- # return 0
00:20:43.295 04:11:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:43.295 04:11:57 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:43.295 04:11:57 -- common/autotest_common.sh@10 -- # set +x
00:20:43.295 04:11:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:43.295 04:11:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:20:43.295 04:11:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:20:46.588 04:12:00 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:20:46.588 04:12:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:20:46.588 04:12:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0
00:20:46.588 04:12:00 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:20:46.847 04:12:01 -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:20:46.847 04:12:01 -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']'
00:20:46.847 04:12:01 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:20:46.847 04:12:01 -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:20:46.847 04:12:01 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:47.106 [2024-04-19 04:12:01.455054] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:47.106 04:12:01 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:47.364 04:12:01 -- host/perf.sh@45 -- # for bdev in $bdevs
00:20:47.364 04:12:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:47.622 04:12:01 -- host/perf.sh@45 -- # for bdev in $bdevs
00:20:47.622 04:12:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:20:47.880 04:12:02 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:48.138 [2024-04-19 04:12:02.459088] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:48.138 04:12:02 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:48.396 04:12:02 -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']'
00:20:48.396 04:12:02 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:20:48.396 04:12:02 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:20:48.396 04:12:02 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0'
00:20:49.772 Initializing NVMe Controllers
00:20:49.772 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54]
00:20:49.772 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0
00:20:49.772 Initialization complete. Launching workers.
00:20:49.772 ========================================================
00:20:49.772 Latency(us)
00:20:49.772 Device Information : IOPS MiB/s Average min max
00:20:49.772 PCIE (0000:86:00.0) NSID 1 from core 0: 70134.81 273.96 455.63 42.66 4390.93
00:20:49.772 ========================================================
00:20:49.772 Total : 70134.81 273.96 455.63 42.66 4390.93
00:20:49.772
00:20:49.772 04:12:04 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:49.772 EAL: No free 2048 kB hugepages reported on node 1
00:20:51.144 Initializing NVMe Controllers
00:20:51.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:51.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:51.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:51.145 Initialization complete. Launching workers.
00:20:51.145 ========================================================
00:20:51.145 Latency(us)
00:20:51.145 Device Information : IOPS MiB/s Average min max
00:20:51.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 111.72 0.44 9307.34 170.49 45039.26
00:20:51.145 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.86 0.21 18952.79 7947.37 50870.92
00:20:51.145 ========================================================
00:20:51.145 Total : 166.58 0.65 12483.99 170.49 50870.92
00:20:51.145
00:20:51.145 04:12:05 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:51.145 EAL: No free 2048 kB hugepages reported on node 1
00:20:52.522 Initializing NVMe Controllers
00:20:52.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:52.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:52.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:52.522 Initialization complete. Launching workers.
00:20:52.522 ========================================================
00:20:52.522 Latency(us)
00:20:52.522 Device Information : IOPS MiB/s Average min max
00:20:52.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7679.98 30.00 4175.12 454.78 10306.79
00:20:52.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3837.99 14.99 8417.23 4144.62 16106.90
00:20:52.522 ========================================================
00:20:52.522 Total : 11517.97 44.99 5588.66 454.78 16106.90
00:20:52.522
00:20:52.522 04:12:06 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:20:52.522 04:12:06 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:20:52.522 04:12:06 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:52.522 EAL: No free 2048 kB hugepages reported on node 1
00:20:55.053 Initializing NVMe Controllers
00:20:55.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:55.053 Controller IO queue size 128, less than required.
00:20:55.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:55.053 Controller IO queue size 128, less than required.
00:20:55.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:55.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:55.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:55.053 Initialization complete. Launching workers.
00:20:55.053 ========================================================
00:20:55.053 Latency(us)
00:20:55.053 Device Information : IOPS MiB/s Average min max
00:20:55.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1224.06 306.01 106991.02 73565.44 141407.22
00:20:55.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.79 142.45 236038.97 71684.44 367880.90
00:20:55.053 ========================================================
00:20:55.053 Total : 1793.85 448.46 147981.45 71684.44 367880.90
00:20:55.053
00:20:55.053 04:12:09 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:20:55.053 EAL: No free 2048 kB hugepages reported on node 1
00:20:55.053 No valid NVMe controllers or AIO or URING devices found
00:20:55.053 Initializing NVMe Controllers
00:20:55.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:55.053 Controller IO queue size 128, less than required.
00:20:55.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:55.053 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:55.053 Controller IO queue size 128, less than required.
00:20:55.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:55.053 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:20:55.053 WARNING: Some requested NVMe devices were skipped
00:20:55.053 04:12:09 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:55.053 EAL: No free 2048 kB hugepages reported on node 1
00:20:58.457 Initializing NVMe Controllers
00:20:58.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:58.457 Controller IO queue size 128, less than required.
00:20:58.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.457 Controller IO queue size 128, less than required.
00:20:58.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:58.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:58.457 Initialization complete. Launching workers.
00:20:58.457
00:20:58.457 ====================
00:20:58.457 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:58.457 TCP transport:
00:20:58.457 polls: 19180
00:20:58.457 idle_polls: 8495
00:20:58.457 sock_completions: 10685
00:20:58.457 nvme_completions: 4929
00:20:58.457 submitted_requests: 7396
00:20:58.457 queued_requests: 1
00:20:58.457
00:20:58.457 ====================
00:20:58.457 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:58.457 TCP transport:
00:20:58.457 polls: 16207
00:20:58.457 idle_polls: 5743
00:20:58.457 sock_completions: 10464
00:20:58.457 nvme_completions: 5029
00:20:58.457 submitted_requests: 7562
00:20:58.457 queued_requests: 1
00:20:58.457 ========================================================
00:20:58.457 Latency(us)
00:20:58.457 Device Information : IOPS MiB/s Average min max
00:20:58.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1231.80 307.95 108052.64 62605.83 161543.26
00:20:58.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1256.79 314.20 103268.93 48880.13 134156.62
00:20:58.457 ========================================================
00:20:58.457 Total : 2488.59 622.15 105636.76 48880.13 161543.26
00:20:58.457
00:20:58.457 04:12:12 -- host/perf.sh@66 -- # sync
00:20:58.457 04:12:12 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:58.457 04:12:12 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:58.457 04:12:12 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:58.457 04:12:12 -- host/perf.sh@114 -- # nvmftestfini
00:20:58.457 04:12:12 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:58.457 04:12:12 -- nvmf/common.sh@117 -- # sync
00:20:58.457 04:12:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:58.457 04:12:12 -- nvmf/common.sh@120 -- # set +e
00:20:58.457 04:12:12 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:58.457 04:12:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:58.457 rmmod nvme_tcp
00:20:58.457 rmmod nvme_fabrics
00:20:58.457 rmmod nvme_keyring
00:20:58.457 04:12:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:58.457 04:12:12 -- nvmf/common.sh@124 -- # set -e
00:20:58.457 04:12:12 -- nvmf/common.sh@125 -- # return 0
00:20:58.457 04:12:12 -- nvmf/common.sh@478 -- # '[' -n 3894983 ']'
00:20:58.457 04:12:12 -- nvmf/common.sh@479 -- # killprocess 3894983
00:20:58.457 04:12:12 -- common/autotest_common.sh@936 -- # '[' -z 3894983 ']'
00:20:58.457 04:12:12 -- common/autotest_common.sh@940 -- # kill -0 3894983
00:20:58.457 04:12:12 -- common/autotest_common.sh@941 -- # uname
00:20:58.457 04:12:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:58.457 04:12:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3894983
00:20:58.457 04:12:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:58.457 04:12:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:58.457 04:12:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3894983'
00:20:58.457 killing process with pid 3894983
00:20:58.457 04:12:12 -- common/autotest_common.sh@955 -- # kill 3894983
00:20:58.457 04:12:12 -- common/autotest_common.sh@960 -- # wait 3894983
00:20:59.834 04:12:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:59.834 04:12:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:59.834 04:12:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:59.834 04:12:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:59.834 04:12:14 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:59.834 04:12:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:59.834 04:12:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:59.834 04:12:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:02.371 04:12:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:02.371
00:21:02.371 real 0m25.423s
00:21:02.371 user 1m10.356s
00:21:02.371 sys 0m7.428s
00:21:02.371 04:12:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:02.371 04:12:16 -- common/autotest_common.sh@10 -- # set +x
00:21:02.371 ************************************
00:21:02.371 END TEST nvmf_perf
00:21:02.371 ************************************
00:21:02.371 04:12:16 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:02.371 04:12:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:02.371 04:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:02.371 04:12:16 -- common/autotest_common.sh@10 -- # set +x
00:21:02.371 ************************************
00:21:02.371 START TEST nvmf_fio_host
00:21:02.371 ************************************
00:21:02.371 04:12:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:02.371 * Looking for test storage...
00:21:02.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.371 04:12:16 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.371 04:12:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.371 04:12:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.371 04:12:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.371 04:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@5 -- # export PATH 00:21:02.371 04:12:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.371 04:12:16 -- nvmf/common.sh@7 -- # uname -s 00:21:02.371 04:12:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.371 04:12:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.371 04:12:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.371 04:12:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.371 04:12:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.371 04:12:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.371 04:12:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.371 04:12:16 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.371 04:12:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.371 04:12:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.371 04:12:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:02.371 04:12:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:02.371 04:12:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.371 04:12:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.371 04:12:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.371 04:12:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.371 04:12:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.371 04:12:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.371 04:12:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.371 04:12:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.371 04:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.371 04:12:16 -- paths/export.sh@5 -- # export PATH 00:21:02.372 04:12:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.372 04:12:16 -- nvmf/common.sh@47 -- # : 0 00:21:02.372 04:12:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.372 04:12:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.372 04:12:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.372 04:12:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.372 04:12:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.372 04:12:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.372 04:12:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.372 04:12:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.372 04:12:16 -- host/fio.sh@12 -- # nvmftestinit 00:21:02.372 04:12:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:02.372 04:12:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.372 04:12:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:02.372 04:12:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:02.372 04:12:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:02.372 04:12:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.372 04:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.372 04:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.372 04:12:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:02.372 04:12:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:02.372 04:12:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.372 04:12:16 -- common/autotest_common.sh@10 -- # set +x 00:21:07.670 04:12:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:07.670 04:12:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.670 04:12:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.670 04:12:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.670 04:12:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.670 04:12:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.670 04:12:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.670 04:12:21 -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.670 04:12:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.670 04:12:21 -- nvmf/common.sh@296 -- # e810=() 00:21:07.670 04:12:21 -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.670 04:12:21 -- nvmf/common.sh@297 -- # x722=() 00:21:07.670 04:12:21 -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.670 04:12:21 -- nvmf/common.sh@298 -- # mlx=() 00:21:07.670 04:12:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.670 04:12:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.670 04:12:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.670 04:12:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.670 04:12:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.670 04:12:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:07.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:07.670 04:12:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.670 04:12:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:07.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:07.670 04:12:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.670 04:12:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.670 04:12:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.670 04:12:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:07.670 Found net devices under 0000:af:00.0: cvl_0_0 00:21:07.670 04:12:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.670 04:12:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.670 04:12:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.670 04:12:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.670 04:12:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:07.670 Found net devices under 0000:af:00.1: cvl_0_1 00:21:07.670 04:12:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.670 04:12:21 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:07.670 04:12:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:07.670 04:12:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:07.670 04:12:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.670 04:12:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.670 04:12:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.670 04:12:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.670 04:12:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.670 04:12:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.670 04:12:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.670 04:12:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.670 04:12:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.670 04:12:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.670 04:12:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.670 04:12:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.670 04:12:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.670 04:12:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.670 04:12:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.670 04:12:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.670 04:12:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.929 04:12:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.929 04:12:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.929 04:12:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:07.929 00:21:07.929 --- 10.0.0.2 ping statistics --- 00:21:07.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.929 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:07.929 04:12:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:07.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:07.929 00:21:07.929 --- 10.0.0.1 ping statistics --- 00:21:07.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.929 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:07.929 04:12:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.929 04:12:22 -- nvmf/common.sh@411 -- # return 0 00:21:07.929 04:12:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:07.929 04:12:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.929 04:12:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:07.929 04:12:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:07.929 04:12:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.929 04:12:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:07.929 04:12:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:07.929 04:12:22 -- host/fio.sh@14 -- # [[ y != y ]] 00:21:07.929 04:12:22 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:07.929 04:12:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.929 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.929 04:12:22 -- host/fio.sh@22 -- # nvmfpid=3902304 00:21:07.929 04:12:22 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.929 04:12:22 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.929 04:12:22 -- host/fio.sh@26 -- # waitforlisten 3902304 00:21:07.929 04:12:22 -- common/autotest_common.sh@817 -- # '[' -z 3902304 ']' 00:21:07.929 04:12:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.929 04:12:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.929 04:12:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.929 04:12:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.929 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:21:07.929 [2024-04-19 04:12:22.346034] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:21:07.929 [2024-04-19 04:12:22.346087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.929 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.929 [2024-04-19 04:12:22.431006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.187 [2024-04-19 04:12:22.522442] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.187 [2024-04-19 04:12:22.522484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.187 [2024-04-19 04:12:22.522494] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.187 [2024-04-19 04:12:22.522503] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.187 [2024-04-19 04:12:22.522511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.187 [2024-04-19 04:12:22.522559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:08.187 [2024-04-19 04:12:22.522658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:08.187 [2024-04-19 04:12:22.522685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:08.187 [2024-04-19 04:12:22.522687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:09.137 04:12:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:09.137 04:12:23 -- common/autotest_common.sh@850 -- # return 0
00:21:09.137 04:12:23 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:09.137 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 [2024-04-19 04:12:23.296801] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:09.137 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.137 04:12:23 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt
00:21:09.137 04:12:23 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 04:12:23 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:09.137 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 Malloc1
00:21:09.137 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.137 04:12:23 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:09.137 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.137 04:12:23 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:09.137 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.137 04:12:23 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:09.137 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.137 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.137 [2024-04-19 04:12:23.392908] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:09.137 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.138 04:12:23 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:09.138 04:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.138 04:12:23 -- common/autotest_common.sh@10 -- # set +x
00:21:09.138 04:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.138 04:12:23 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:21:09.138 04:12:23 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:09.138 04:12:23 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:09.138 04:12:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:21:09.138 04:12:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:09.138 04:12:23 -- common/autotest_common.sh@1325 -- # local sanitizers
00:21:09.138 04:12:23 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:09.138 04:12:23 -- common/autotest_common.sh@1327 -- # shift
00:21:09.138 04:12:23 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:21:09.138 04:12:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # grep libasan
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # asan_lib=
00:21:09.138 04:12:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:21:09.138 04:12:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:09.138 04:12:23 -- common/autotest_common.sh@1331 -- # asan_lib=
00:21:09.138 04:12:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:21:09.138 04:12:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:21:09.138 04:12:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:09.397 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:21:09.397 fio-3.35
00:21:09.397 Starting 1 thread
00:21:09.397 EAL: No free 2048 kB hugepages reported on node 1
00:21:11.936
00:21:11.936 test: (groupid=0, jobs=1): err= 0: pid=3902728: Fri Apr 19 04:12:26 2024
00:21:11.936 read: IOPS=7780, BW=30.4MiB/s (31.9MB/s)(61.0MiB/2008msec)
00:21:11.936 slat (usec): min=2, max=167, avg= 2.57, stdev= 1.91
00:21:11.936 clat (usec): min=2499, max=15015, avg=9007.80, stdev=746.43
00:21:11.936 lat (usec): min=2528, max=15018, avg=9010.37, stdev=746.24
00:21:11.936 clat percentiles (usec):
00:21:11.936 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8094], 20.00th=[ 8455],
00:21:11.936 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9241],
00:21:11.936 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159],
00:21:11.936 | 99.00th=[10552], 99.50th=[10683], 99.90th=[14091], 99.95th=[14746],
00:21:11.936 | 99.99th=[15008]
00:21:11.936 bw ( KiB/s): min=29840, max=31744, per=99.94%, avg=31104.00, stdev=873.41, samples=4
00:21:11.936 iops : min= 7460, max= 7936, avg=7776.00, stdev=218.35, samples=4
00:21:11.936 write: IOPS=7764, BW=30.3MiB/s (31.8MB/s)(60.9MiB/2008msec); 0 zone resets
00:21:11.936 slat (usec): min=2, max=154, avg= 2.68, stdev= 1.37
00:21:11.936 clat (usec): min=1795, max=14014, avg=7342.34, stdev=608.62
00:21:11.936 lat (usec): min=1810, max=14017, avg=7345.02, stdev=608.46
00:21:11.936 clat percentiles (usec):
00:21:11.936 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915],
00:21:11.936 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504],
00:21:11.936 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8225],
00:21:11.936 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11600], 99.95th=[12780],
00:21:11.936 | 99.99th=[13960]
00:21:11.936 bw ( KiB/s): min=30976, max=31232, per=100.00%, avg=31072.00, stdev=110.85, samples=4
00:21:11.937 iops : min= 7744, max= 7808, avg=7768.00, stdev=27.71, samples=4
00:21:11.937 lat (msec) : 2=0.03%, 4=0.12%, 10=96.17%, 20=3.69%
00:21:11.937 cpu : usr=72.85%, sys=24.27%, ctx=65, majf=0, minf=4
00:21:11.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:11.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:11.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:11.937 issued rwts: total=15623,15592,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:11.937 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:11.937
00:21:11.937 Run status group 0 (all jobs):
00:21:11.937 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=61.0MiB (64.0MB), run=2008-2008msec
00:21:11.937 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.9MiB (63.9MB), run=2008-2008msec
00:21:11.937 04:12:26 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:11.937 04:12:26 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:11.937 04:12:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:21:11.937 04:12:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:11.937 04:12:26 -- common/autotest_common.sh@1325 -- # local sanitizers
00:21:11.937 04:12:26 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:11.937 04:12:26 -- common/autotest_common.sh@1327 -- # shift
00:21:11.937 04:12:26 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:21:11.937 04:12:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # grep libasan
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # asan_lib=
00:21:11.937 04:12:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:21:11.937 04:12:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:21:11.937 04:12:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:21:11.937 04:12:26 --
common/autotest_common.sh@1331 -- # asan_lib= 00:21:11.937 04:12:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:11.937 04:12:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:11.937 04:12:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:12.195 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:12.195 fio-3.35 00:21:12.195 Starting 1 thread 00:21:12.195 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.726 00:21:14.726 test: (groupid=0, jobs=1): err= 0: pid=3903373: Fri Apr 19 04:12:28 2024 00:21:14.726 read: IOPS=8143, BW=127MiB/s (133MB/s)(255MiB/2006msec) 00:21:14.726 slat (usec): min=3, max=122, avg= 4.23, stdev= 1.62 00:21:14.726 clat (usec): min=2423, max=18482, avg=9298.56, stdev=2178.56 00:21:14.726 lat (usec): min=2427, max=18486, avg=9302.80, stdev=2178.64 00:21:14.726 clat percentiles (usec): 00:21:14.726 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7308], 00:21:14.726 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:21:14.726 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[12911], 00:21:14.726 | 99.00th=[14746], 99.50th=[15664], 99.90th=[17171], 99.95th=[17957], 00:21:14.726 | 99.99th=[18482] 00:21:14.726 bw ( KiB/s): min=57184, max=79968, per=50.80%, avg=66184.00, stdev=9871.41, samples=4 00:21:14.726 iops : min= 3574, max= 4998, avg=4136.50, stdev=616.96, samples=4 00:21:14.726 write: IOPS=4982, BW=77.8MiB/s (81.6MB/s)(136MiB/1746msec); 0 zone resets 00:21:14.726 slat (usec): min=45, max=373, avg=47.64, stdev= 7.25 00:21:14.726 clat (usec): min=2912, max=23391, avg=11166.70, stdev=2220.16 00:21:14.726 lat (usec): min=2958, max=23437, avg=11214.33, stdev=2220.38 00:21:14.726 clat percentiles (usec): 00:21:14.726 | 1.00th=[ 7373], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:21:14.726 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:21:14.726 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13960], 95.00th=[15533], 00:21:14.726 | 99.00th=[17433], 99.50th=[19792], 99.90th=[21890], 99.95th=[22152], 00:21:14.726 | 99.99th=[23462] 00:21:14.726 bw ( KiB/s): min=59104, max=82656, per=87.01%, avg=69360.00, stdev=9882.86, samples=4 00:21:14.727 iops : min= 3694, max= 5166, avg=4335.00, stdev=617.68, samples=4 00:21:14.727 lat (msec) : 4=0.16%, 10=51.48%, 20=48.22%, 50=0.14% 00:21:14.727 cpu : usr=88.43%, sys=10.17%, ctx=91, majf=0, minf=1 00:21:14.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:14.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:14.727 issued rwts: total=16335,8699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:14.727 00:21:14.727 Run status group 0 (all jobs): 00:21:14.727 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2006-2006msec 00:21:14.727 WRITE: bw=77.8MiB/s (81.6MB/s), 77.8MiB/s-77.8MiB/s (81.6MB/s-81.6MB/s), io=136MiB (143MB), run=1746-1746msec 00:21:14.727 04:12:28 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.727 04:12:28 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.727 04:12:28 -- common/autotest_common.sh@10 -- # set +x 00:21:14.727 04:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.727 04:12:28 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:21:14.727 04:12:28 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:14.727 04:12:28 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:14.727 04:12:28 -- host/fio.sh@84 -- # nvmftestfini 00:21:14.727 04:12:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:14.727 04:12:28 -- nvmf/common.sh@117 -- # sync 00:21:14.727 04:12:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.727 04:12:28 -- nvmf/common.sh@120 -- # set +e 00:21:14.727 04:12:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.727 04:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.727 rmmod nvme_tcp 00:21:14.727 rmmod nvme_fabrics 00:21:14.727 rmmod nvme_keyring 00:21:14.727 04:12:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.727 04:12:29 -- nvmf/common.sh@124 -- # set -e 00:21:14.727 04:12:29 -- nvmf/common.sh@125 -- # return 0 00:21:14.727 04:12:29 -- nvmf/common.sh@478 -- # '[' -n 3902304 ']' 00:21:14.727 04:12:29 -- nvmf/common.sh@479 -- # killprocess 3902304 00:21:14.727 04:12:29 -- common/autotest_common.sh@936 -- # '[' -z 3902304 ']' 00:21:14.727 04:12:29 -- common/autotest_common.sh@940 -- # kill -0 3902304 00:21:14.727 04:12:29 -- common/autotest_common.sh@941 -- # uname 00:21:14.727 04:12:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.727 04:12:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3902304 00:21:14.727 04:12:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:14.727 04:12:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:14.727 04:12:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3902304' 00:21:14.727 killing process with pid 3902304 00:21:14.727 04:12:29 -- common/autotest_common.sh@955 -- # kill 3902304 00:21:14.727 04:12:29 -- common/autotest_common.sh@960 -- # wait 3902304 00:21:14.986 04:12:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:14.986 04:12:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:14.986 04:12:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:14.986 04:12:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.986 04:12:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.986 04:12:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.986 04:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.986 04:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.892 04:12:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.892 00:21:16.892 real 0m14.898s 00:21:16.892 user 0m53.729s 00:21:16.892 sys 0m6.106s 00:21:16.892 04:12:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:16.892 04:12:31 -- common/autotest_common.sh@10 -- # set +x 00:21:16.892 ************************************ 00:21:16.892 END TEST nvmf_fio_host 00:21:16.892 ************************************ 00:21:17.151 04:12:31 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.151 04:12:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:17.151 04:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:17.151 04:12:31 -- common/autotest_common.sh@10 -- # 
set +x 00:21:17.151 ************************************ 00:21:17.151 START TEST nvmf_failover 00:21:17.151 ************************************ 00:21:17.151 04:12:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.151 * Looking for test storage... 00:21:17.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.151 04:12:31 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.151 04:12:31 -- nvmf/common.sh@7 -- # uname -s 00:21:17.151 04:12:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.151 04:12:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.151 04:12:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.151 04:12:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.151 04:12:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.151 04:12:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.151 04:12:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.151 04:12:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.151 04:12:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.151 04:12:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.151 04:12:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:17.151 04:12:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:17.151 04:12:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.151 04:12:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.151 04:12:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.151 04:12:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.151 04:12:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.152 04:12:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.152 04:12:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.152 04:12:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.411 04:12:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.411 04:12:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.411 04:12:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.411 04:12:31 -- paths/export.sh@5 -- # export PATH 00:21:17.411 04:12:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.411 04:12:31 -- nvmf/common.sh@47 -- # : 0 00:21:17.411 04:12:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.411 04:12:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.411 04:12:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.412 04:12:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.412 04:12:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.412 04:12:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.412 04:12:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.412 04:12:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.412 04:12:31 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.412 04:12:31 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.412 04:12:31 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.412 04:12:31 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.412 04:12:31 -- host/failover.sh@18 -- # nvmftestinit 00:21:17.412 04:12:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:17.412 04:12:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.412 04:12:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:17.412 04:12:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:17.412 04:12:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:17.412 04:12:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.412 04:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.412 04:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.412 04:12:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:17.412 04:12:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:17.412 04:12:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.412 04:12:31 -- common/autotest_common.sh@10 -- # set +x 00:21:22.689 04:12:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.689 04:12:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:22.689 04:12:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:22.689 04:12:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:22.689 04:12:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:22.689 04:12:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:22.689 04:12:37 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:22.689 04:12:37 -- nvmf/common.sh@295 -- # net_devs=() 00:21:22.689 04:12:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:22.689 04:12:37 -- nvmf/common.sh@296 -- # e810=() 00:21:22.689 04:12:37 -- nvmf/common.sh@296 -- # local -ga e810 00:21:22.689 04:12:37 -- nvmf/common.sh@297 -- # x722=() 00:21:22.690 04:12:37 -- nvmf/common.sh@297 -- # local -ga x722 00:21:22.690 04:12:37 -- nvmf/common.sh@298 -- # mlx=() 00:21:22.690 04:12:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:22.690 04:12:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.690 04:12:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:22.690 04:12:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:22.690 04:12:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.690 04:12:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:22.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:22.690 04:12:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:22.690 04:12:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:22.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:22.690 04:12:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.690 04:12:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.690 04:12:37 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.690 04:12:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:22.690 Found net devices under 0000:af:00.0: cvl_0_0 00:21:22.690 04:12:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.690 04:12:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:22.690 04:12:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.690 04:12:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.690 04:12:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:22.690 Found net devices under 0000:af:00.1: cvl_0_1 00:21:22.690 04:12:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.690 04:12:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:22.690 04:12:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:22.690 04:12:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:22.690 04:12:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.690 04:12:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.690 04:12:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.690 04:12:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:22.690 04:12:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.690 04:12:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.690 04:12:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:22.690 04:12:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.690 04:12:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.690 04:12:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:22.690 04:12:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:22.690 04:12:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.690 04:12:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.690 04:12:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.690 04:12:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.690 04:12:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:22.690 04:12:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.949 04:12:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.949 04:12:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.949 04:12:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:22.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:21:22.949 00:21:22.949 --- 10.0.0.2 ping statistics --- 00:21:22.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.949 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:22.949 04:12:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:21:22.949 00:21:22.949 --- 10.0.0.1 ping statistics --- 00:21:22.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.949 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:22.949 04:12:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.949 04:12:37 -- nvmf/common.sh@411 -- # return 0 00:21:22.949 04:12:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:22.949 04:12:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.949 04:12:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:22.949 04:12:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:22.949 04:12:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.949 04:12:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:22.949 04:12:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:22.949 04:12:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:22.949 04:12:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:22.949 04:12:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:22.949 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.949 04:12:37 -- nvmf/common.sh@470 -- # nvmfpid=3907359 00:21:22.949 04:12:37 -- nvmf/common.sh@471 -- # waitforlisten 3907359 00:21:22.949 04:12:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:22.949 04:12:37 -- common/autotest_common.sh@817 -- # '[' -z 3907359 ']' 00:21:22.949 04:12:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.949 04:12:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:22.949 04:12:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.949 04:12:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:22.949 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:22.949 [2024-04-19 04:12:37.423537] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:21:22.949 [2024-04-19 04:12:37.423590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.209 [2024-04-19 04:12:37.500578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:23.209 [2024-04-19 04:12:37.590568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.209 [2024-04-19 04:12:37.590611] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.209 [2024-04-19 04:12:37.590621] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.209 [2024-04-19 04:12:37.590629] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.209 [2024-04-19 04:12:37.590637] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
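
For readers following the trace: the nvmf_tcp_init block above wires the two detected E810 ports into a self-contained test topology. The target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and nvmf_tgt is then launched inside the namespace so initiator and target traffic crosses a real link. A minimal standalone sketch of the same setup, assuming root privileges; TGT_IF/INI_IF/NS are just shorthand for the names used in this run:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set $TGT_IF netns $NS                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev $INI_IF           # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
  ip link set $INI_IF up
  ip netns exec $NS ip link set $TGT_IF up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target sanity check
  # then start the SPDK target inside the namespace, as the trace does:
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
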
00:21:23.209 [2024-04-19 04:12:37.590742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.209 [2024-04-19 04:12:37.590867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.209 [2024-04-19 04:12:37.590868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.209 04:12:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.209 04:12:37 -- common/autotest_common.sh@850 -- # return 0 00:21:23.209 04:12:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.209 04:12:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.209 04:12:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.209 04:12:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.209 04:12:37 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:23.468 [2024-04-19 04:12:37.952661] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.468 04:12:37 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:23.726 Malloc0 00:21:23.985 04:12:38 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.985 04:12:38 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:24.244 04:12:38 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.244 [2024-04-19 04:12:38.760955] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.503 04:12:38 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:24.503 [2024-04-19 04:12:39.013752] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:24.762 04:12:39 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:24.762 [2024-04-19 04:12:39.262646] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:25.021 04:12:39 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:25.021 04:12:39 -- host/failover.sh@31 -- # bdevperf_pid=3907655 00:21:25.021 04:12:39 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:25.021 04:12:39 -- host/failover.sh@34 -- # waitforlisten 3907655 /var/tmp/bdevperf.sock 00:21:25.021 04:12:39 -- common/autotest_common.sh@817 -- # '[' -z 3907655 ']' 00:21:25.021 04:12:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.021 04:12:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.021 04:12:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...'
00:21:25.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:25.021 04:12:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:25.021 04:12:39 -- common/autotest_common.sh@10 -- # set +x
00:21:25.973 04:12:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:25.973 04:12:40 -- common/autotest_common.sh@850 -- # return 0
00:21:25.973 04:12:40 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:26.243 NVMe0n1
00:21:26.243 04:12:40 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:26.502 00
00:21:26.502 04:12:40 -- host/failover.sh@39 -- # run_test_pid=3907921
00:21:26.502 04:12:40 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:26.502 04:12:40 -- host/failover.sh@41 -- # sleep 1
00:21:27.879 04:12:41 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:27.879 [2024-04-19 04:12:42.212954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b41870 is same with the state(5) to be set
[the preceding tcp.c:1587 recv-state message repeats 30 more times for tqpair=0x1b41870, 04:12:42.213001 through 04:12:42.213161]
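
What the trace above exercises: host/failover.sh@35-@36 attach the same target subsystem twice under one controller name (-b NVMe0), once per listener, so the resulting NVMe0n1 bdev has a primary and an alternate TCP path; @43 then removes the 4420 listener while bdevperf's verify job is running, which is what triggers the target-side qpair teardown logged just above. Reduced to the bare RPC sequence (a sketch; socket path and NQN are taken from this run, $RPC is shorthand for scripts/rpc.py):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # two attachments with the same -b name register two paths for one bdev
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the first path on the target side; I/O should fail over to 4421
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
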
04:12:42 -- host/failover.sh@45 -- # sleep 3
00:21:31.166 04:12:45 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:31.166 00
00:21:31.425 04:12:45 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
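
The same pattern then repeats for the remaining paths: with I/O settled on 4421, @47 adds a third path on 4422 and @48 tears 4421 down, and further below @53/@57 rotate the listener once more before @59 waits for bdevperf to finish. The choreography as a shell skeleton, under the same assumptions ($RPC, run_test_pid) as the sketch above:

  sleep 3    # @45: let bdevperf settle on the surviving path
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3    # @50: I/O now rides on 4422
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1    # @55
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait $run_test_pid    # @59: the verify job must still exit 0
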
00:21:31.425 [2024-04-19 04:12:45.864871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b42060 is same with the state(5) to be set
[the preceding tcp.c:1587 recv-state message repeats 6 more times for tqpair=0x1b42060, 04:12:45.864917 through 04:12:45.864966]
00:21:31.425 04:12:45 -- host/failover.sh@50 -- # sleep 3
00:21:34.715 04:12:48 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:34.715 [2024-04-19 04:12:49.122758] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:34.715 04:12:49 -- host/failover.sh@55 -- # sleep 1
00:21:35.653 04:12:50 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:35.912 [2024-04-19 04:12:50.391132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e7a90 is same with the state(5) to be set
[the preceding tcp.c:1587 recv-state message repeats roughly 44 more times for tqpair=0x18e7a90, 04:12:50.391175 through 04:12:50.391560]
04:12:50 -- host/failover.sh@59 -- # wait 3907921
00:21:42.484 0
00:21:42.484 04:12:56 -- host/failover.sh@61 -- # killprocess 3907655
00:21:42.484 04:12:56 -- common/autotest_common.sh@936 -- # '[' -z 3907655 ']'
00:21:42.484 04:12:56 -- common/autotest_common.sh@940 -- # kill -0 3907655
00:21:42.484 04:12:56 -- common/autotest_common.sh@941 -- # uname
00:21:42.484 04:12:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:42.484 04:12:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3907655
00:21:42.484 04:12:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:42.484 04:12:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:42.484 04:12:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3907655'
killing process with pid 3907655
00:21:42.484 04:12:56 -- common/autotest_common.sh@955 -- # kill 3907655
00:21:42.484 04:12:56 -- common/autotest_common.sh@960 -- # wait 3907655
00:21:42.484 04:12:56 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:42.484 [2024-04-19 04:12:39.333608] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:21:42.484 [2024-04-19 04:12:39.333678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907655 ] 00:21:42.484 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.484 [2024-04-19 04:12:39.414267] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.484 [2024-04-19 04:12:39.501054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.484 Running I/O for 15 seconds... 00:21:42.484 [2024-04-19 04:12:42.213567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.484 [2024-04-19 04:12:42.213608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.484 [2024-04-19 04:12:42.213633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.484 [2024-04-19 04:12:42.213654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.484 [2024-04-19 04:12:42.213673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd4840 is same with the state(5) to be set 00:21:42.484 [2024-04-19 04:12:42.213744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.484 [2024-04-19 04:12:42.213757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.484 [2024-04-19 04:12:42.213784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.484 [2024-04-19 04:12:42.213806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.484 [2024-04-19 04:12:42.213817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.484 [2024-04-19 04:12:42.213827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
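
The dump that follows is try.txt, bdevperf's view of the first failover: deleting the qpair for the 4420 path makes the host driver complete everything still queued on it, from the four ASYNC EVENT REQUESTs above down to each in-flight WRITE/READ, with ABORTED - SQ DELETION (00/08), after which bdev_nvme requeues that I/O on the surviving path, which is why the verify job can still finish with rc 0. A few one-liners for sizing such a dump (a sketch; run against the try.txt path printed at @63 above, counts vary per run):

  # total commands completed as aborted-by-SQ-deletion
  grep -c 'ABORTED - SQ DELETION' try.txt
  # aborted I/O split by opcode
  grep -oE '(READ|WRITE) sqid:1' try.txt | sort | uniq -c
  # smallest and largest aborted LBA, a rough measure of in-flight depth
  grep -oE 'lba:[0-9]+' try.txt | cut -d: -f2 | sort -n | sed -n '1p;$p'
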
[try.txt continues with the teardown dump for the deleted qpair: a nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* line for each outstanding I/O (WRITE sqid:1, lba 87768 through 88344, and READ sqid:1, lba 87328 through 87544, len:8 each), every one followed by a matching nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion]
00:21:42.486 [2024-04-19 04:12:42.216041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:98 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.486 [2024-04-19 04:12:42.216251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.486 [2024-04-19 04:12:42.216263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.487 [2024-04-19 04:12:42.216492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.487 [2024-04-19 04:12:42.216514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.487 [2024-04-19 04:12:42.216546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.487 [2024-04-19 04:12:42.216554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87728 len:8 PRP1 0x0 PRP2 0x0 00:21:42.487 [2024-04-19 04:12:42.216564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.487 [2024-04-19 04:12:42.216611] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bf33c0 was disconnected and freed. reset controller. 00:21:42.487 [2024-04-19 04:12:42.216623] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:42.487 [2024-04-19 04:12:42.216632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.487 [2024-04-19 04:12:42.220852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.487 [2024-04-19 04:12:42.220885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4840 (9): Bad file descriptor 00:21:42.487 [2024-04-19 04:12:42.389851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
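Every abort record in the trace above follows the same two-line pattern: nvme_io_qpair_print_command prints the command being dropped (opcode, sqid, cid, nsid, lba, len) and spdk_nvme_print_completion prints its ABORTED - SQ DELETION (00/08) status. A minimal triage sketch for tallying those pairs, assuming one record per line as in the excerpt above; this script is not part of the SPDK tree, and the autotest.log file name is a placeholder:

#!/usr/bin/env python3
# Sketch (hypothetical, not from the SPDK tree): count the aborted commands
# printed by nvme_qpair.c above, grouped by opcode, with their LBA range.
import re
import sys
from collections import defaultdict

# Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88200 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) ..."
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

def tally(lines):
    """Count command records whose following completion reports SQ DELETION."""
    counts = defaultdict(int)   # opcode -> number of aborted commands
    lbas = defaultdict(list)    # opcode -> LBAs of those commands
    pending = None              # last command seen, awaiting its completion
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            pending = (m.group(1), int(m.group(5)))   # (opcode, lba)
            continue
        if pending and CPL_RE.search(line):
            op, lba = pending
            counts[op] += 1
            lbas[op].append(lba)
            pending = None
    return counts, lbas

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "autotest.log"  # placeholder name
    with open(path, encoding="utf-8", errors="replace") as f:
        counts, lbas = tally(f)
    for op in sorted(counts):
        print(f"{op}: {counts[op]} aborted, lba {min(lbas[op])}-{max(lbas[op])}")

The per-opcode counts and contiguous LBA ranges it prints make it easy to confirm that these aborts are collateral from the submission-queue deletion (generic status 00/08) during the controller reset, rather than individual I/O errors.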
00:21:42.487 [2024-04-19 04:12:45.864818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:42.487 [2024-04-19 04:12:45.864863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST command/completion pair repeats for qid:0 cid:2, cid:1, and cid:0 ...]
00:21:42.487 [2024-04-19 04:12:45.864944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd4840 is same with the state(5) to be set
00:21:42.487 [2024-04-19 04:12:45.869270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.487 [2024-04-19 04:12:45.869298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... remaining in-flight READ (lba 70064-70112) and WRITE (lba 70120-70512) command/completion pairs, each aborted with the same SQ DELETION (00/08) status ...]
00:21:42.488 [2024-04-19 04:12:45.870602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:42.488 [2024-04-19 04:12:45.870612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70520 len:8 PRP1 0x0 PRP2 0x0
00:21:42.488 [2024-04-19 04:12:45.870622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:42.488 [2024-04-19 04:12:45.870634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the aborting-queued-i/o, manual-completion, and SQ DELETION records repeat for queued WRITE commands lba 70528 through 70840 ...]
00:21:42.490 [2024-04-19 04:12:45.872035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:42.490 [2024-04-19 04:12:45.872042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:42.490 [2024-04-19 04:12:45.872050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70848 len:8 PRP1 0x0 PRP2 0x0
00:21:42.490 [2024-04-19 04:12:45.872059] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70856 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70864 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70872 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70880 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70888 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70896 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70904 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70912 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70920 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70928 len:8 PRP1 0x0 PRP2 0x0 00:21:42.490 [2024-04-19 04:12:45.872410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.490 [2024-04-19 04:12:45.872420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.490 [2024-04-19 04:12:45.872427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.490 [2024-04-19 04:12:45.872435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70936 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.872443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.872454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.872461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.872469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70944 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.872479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:42.491 [2024-04-19 04:12:45.872489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.872496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.872504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70952 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.872514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.872523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.872530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.872538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70960 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.872549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.872559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.872566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.872574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70968 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.872583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.872593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.872600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.872608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70976 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70984 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70992 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882793] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71000 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71008 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71016 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71024 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71032 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.882972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.882980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71040 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.882989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.882999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.883006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.883013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71048 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.883022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.883032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.883039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.883047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71056 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.883056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.883066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.883073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.883081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71064 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.883090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.883100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.491 [2024-04-19 04:12:45.883107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.491 [2024-04-19 04:12:45.883114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71072 len:8 PRP1 0x0 PRP2 0x0 00:21:42.491 [2024-04-19 04:12:45.883124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.491 [2024-04-19 04:12:45.883189] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be0dd0 was disconnected and freed. reset controller. 00:21:42.491 [2024-04-19 04:12:45.883204] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:42.491 [2024-04-19 04:12:45.883217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.491 [2024-04-19 04:12:45.883268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4840 (9): Bad file descriptor 00:21:42.491 [2024-04-19 04:12:45.889028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.491 [2024-04-19 04:12:45.973664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
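The run above is SPDK tearing down I/O qpair 1 during path failover: every command still queued on the deleted submission queue is completed manually with ABORTED - SQ DELETION (00/08) status, then bdev_nvme fails the path over from 10.0.0.2:4421 to 10.0.0.2:4422 and resets the controller. For readers who want to condense abort storms like this by hand, a minimal shell sketch follows (an editor's illustration, not part of the test run; the file name nvmf_failover.log is hypothetical and stands for a saved copy of this console output). It collapses each contiguous run of aborted LBAs into one summary line:

  # Hypothetical helper, not from the test scripts: pull the lba:<n> of every
  # aborted 8-block command and merge consecutive LBAs (step 8) into runs.
  grep -o 'lba:[0-9]* len:8' nvmf_failover.log |
    awk -F'[: ]' '
      { lba = $2 }
      NR == 1 { first = lba }
      prev != "" && lba != prev + 8 {
          # gap in the LBA sequence: flush the previous run and start a new one
          printf "aborted lba:%d..%d (%d cmds)\n", first, prev, (prev - first) / 8 + 1
          first = lba
      }
      { prev = lba }
      END { if (prev != "") printf "aborted lba:%d..%d (%d cmds)\n", first, prev, (prev - first) / 8 + 1 }'

Against the sequence above this would print a single "aborted lba:70704..71072 (47 cmds)" line per storm instead of four log messages per LBA.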
00:21:42.491 [2024-04-19 04:12:50.392910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.491 [2024-04-19 04:12:50.392951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pair repeats for every queued READ from lba:84792 through lba:85152 in steps of 8 (cid varies per command), timestamps 2024-04-19 04:12:50.392971 through 04:12:50.393974 ...]
[... the pair then repeats for every queued WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) from lba:85176 through lba:85552 in steps of 8 (cid varies per command), timestamps 04:12:50.393986 through 04:12:50.395024 ...]
00:21:42.494 [2024-04-19 04:12:50.395063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:42.494 [2024-04-19 04:12:50.395074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85560 len:8 PRP1 0x0 PRP2 0x0
00:21:42.494 [2024-04-19 04:12:50.395083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort_queued_reqs / manual_complete_request / print_command / print_completion sequence repeats for every queued WRITE from lba:85568 through lba:85728 in steps of 8, timestamps 04:12:50.395097 through 04:12:50.395824; the captured log then cuts off mid-entry ...]
00:21:42.495 [2024-04-19 04:12:50.395833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:21:42.495 [2024-04-19 04:12:50.395841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.395848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85736 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.395858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.395867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.395875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.395882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85744 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.395892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.395901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.395908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.395916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85752 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.395925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.395936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.395943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.395951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85760 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.395960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.395970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.395977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.395986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85768 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.395995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.396005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.396012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.396020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85776 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.396029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.396038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.396045] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.396053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85784 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.396062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.396074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.396081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.396089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85792 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.396098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.396107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.396114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.396122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85800 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.396131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.396141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.396148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.396156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85160 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.405893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:42.495 [2024-04-19 04:12:50.405908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:42.495 [2024-04-19 04:12:50.405921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85168 len:8 PRP1 0x0 PRP2 0x0 00:21:42.495 [2024-04-19 04:12:50.405933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.405990] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c01340 was disconnected and freed. reset controller. 
00:21:42.495 [2024-04-19 04:12:50.406005] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:42.495 [2024-04-19 04:12:50.406037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.495 [2024-04-19 04:12:50.406052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.406070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.495 [2024-04-19 04:12:50.406083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.406097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.495 [2024-04-19 04:12:50.406110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.406124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.495 [2024-04-19 04:12:50.406137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.495 [2024-04-19 04:12:50.406149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:42.495 [2024-04-19 04:12:50.406185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd4840 (9): Bad file descriptor 00:21:42.495 [2024-04-19 04:12:50.411989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.495 [2024-04-19 04:12:50.454262] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
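That burst is the expected signature of a path drop: the qpair to the dead path is drained with SQ-deletion aborts, bdev_nvme fails the trid over to the next registered listener, reconnects, and logs "Resetting controller successful." once the new path is live. host/failover.sh turns this into a pass/fail check by counting those notices in the captured bdevperf output (traced just below); a minimal sketch of that check, assuming the capture file is try.txt as the later cat in this log suggests:

    count=$(grep -c 'Resetting controller successful' try.txt)
    # Three listeners (4420/4421/4422) are cycled, so expect three recoveries.
    (( count == 3 )) || { echo "expected 3 resets, saw $count" >&2; exit 1; }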
00:21:42.495
00:21:42.495 Latency(us)
00:21:42.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.495 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:42.495 Verification LBA range: start 0x0 length 0x4000
00:21:42.495 NVMe0n1 : 15.01 7342.84 28.68 603.31 0.00 16074.30 845.27 26333.56
00:21:42.495 ===================================================================================================================
00:21:42.495 Total : 7342.84 28.68 603.31 0.00 16074.30 845.27 26333.56
00:21:42.495 Received shutdown signal, test time was about 15.000000 seconds
00:21:42.495
00:21:42.495 Latency(us)
00:21:42.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.495 ===================================================================================================================
00:21:42.495 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:42.495 04:12:56 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:42.495 04:12:56 -- host/failover.sh@65 -- # count=3 00:21:42.495 04:12:56 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:42.495 04:12:56 -- host/failover.sh@73 -- # bdevperf_pid=3910805 00:21:42.495 04:12:56 -- host/failover.sh@75 -- # waitforlisten 3910805 /var/tmp/bdevperf.sock 00:21:42.495 04:12:56 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:42.495 04:12:56 -- common/autotest_common.sh@817 -- # '[' -z 3910805 ']' 00:21:42.495 04:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.495 04:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.495 04:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
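The bdevperf phase that follows boils down to: start bdevperf in deferred mode (-z, so it opens its RPC socket and waits instead of running I/O immediately), publish the subsystem on two extra target ports, attach all three paths under one controller name, drop the active path to force a failover, and only then start the verify workload with perform_tests. A condensed sketch, with the addresses, ports, and NQN exactly as traced below; the plain sleep stands in for the harness's waitforlisten poll:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    "$spdk"/build/examples/bdevperf -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    sleep 2                                # harness: waitforlisten $bdevperf_pid $sock
    # Target side: expose cnode1 on two more ports.
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side: one bdev name (NVMe0), three paths to fail over between.
    for port in 4420 4421 4422; do
        "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Drop the active path, give the reset a moment, then run the workload.
    "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests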
00:21:42.496 04:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.496 04:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:42.496 04:12:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.496 04:12:56 -- common/autotest_common.sh@850 -- # return 0 00:21:42.496 04:12:56 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:42.496 [2024-04-19 04:12:56.981961] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:42.755 04:12:57 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:42.755 [2024-04-19 04:12:57.230786] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:42.755 04:12:57 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.323 NVMe0n1 00:21:43.323 04:12:57 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.891 00:21:43.891 04:12:58 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.150 00:21:44.410 04:12:58 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.410 04:12:58 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:44.410 04:12:58 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.669 04:12:59 -- host/failover.sh@87 -- # sleep 3 00:21:47.957 04:13:02 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.957 04:13:02 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:47.957 04:13:02 -- host/failover.sh@90 -- # run_test_pid=3911861 00:21:47.957 04:13:02 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.957 04:13:02 -- host/failover.sh@92 -- # wait 3911861 00:21:49.337 0 00:21:49.337 04:13:03 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:49.337 [2024-04-19 04:12:56.495710] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:21:49.337 [2024-04-19 04:12:56.495776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910805 ] 00:21:49.337 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.337 [2024-04-19 04:12:56.575537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.337 [2024-04-19 04:12:56.656591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.337 [2024-04-19 04:12:59.154609] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:49.337 [2024-04-19 04:12:59.154663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.337 [2024-04-19 04:12:59.154677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.337 [2024-04-19 04:12:59.154690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.337 [2024-04-19 04:12:59.154700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.337 [2024-04-19 04:12:59.154711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.337 [2024-04-19 04:12:59.154720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.337 [2024-04-19 04:12:59.154731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.337 [2024-04-19 04:12:59.154741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.337 [2024-04-19 04:12:59.154750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.337 [2024-04-19 04:12:59.154785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.337 [2024-04-19 04:12:59.154804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b9840 (9): Bad file descriptor 00:21:49.337 [2024-04-19 04:12:59.205703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:49.337 Running I/O for 1 seconds... 
00:21:49.337
00:21:49.337 Latency(us)
00:21:49.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.337 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:49.337 Verification LBA range: start 0x0 length 0x4000
00:21:49.337 NVMe0n1 : 1.01 7382.88 28.84 0.00 0.00 17252.25 1854.37 14060.45
00:21:49.337 ===================================================================================================================
00:21:49.337 Total : 7382.88 28.84 0.00 0.00 17252.25 1854.37 14060.45
00:21:49.337 04:13:03 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:49.337 04:13:03 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:49.596 04:13:04 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.596 04:13:04 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:49.854 04:13:04 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:49.854 04:13:04 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.114 04:13:04 -- host/failover.sh@101 -- # sleep 3 00:21:53.437 04:13:07 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.437 04:13:07 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:53.437 04:13:07 -- host/failover.sh@108 -- # killprocess 3910805 00:21:53.437 04:13:07 -- common/autotest_common.sh@936 -- # '[' -z 3910805 ']' 00:21:53.437 04:13:07 -- common/autotest_common.sh@940 -- # kill -0 3910805 00:21:53.437 04:13:07 -- common/autotest_common.sh@941 -- # uname 00:21:53.437 04:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.437 04:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3910805 00:21:53.437 04:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:53.437 04:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:53.437 04:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3910805' killing process with pid 3910805 00:21:53.437 04:13:07 -- common/autotest_common.sh@955 -- # kill 3910805 00:21:53.437 04:13:07 -- common/autotest_common.sh@960 -- # wait 3910805 00:21:53.714 04:13:08 -- host/failover.sh@110 -- # sync 00:21:53.714 04:13:08 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.973 04:13:08 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:53.973 04:13:08 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:53.973 04:13:08 -- host/failover.sh@116 -- # nvmftestfini 00:21:53.973 04:13:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:53.973 04:13:08 -- nvmf/common.sh@117 -- # sync 00:21:53.973 04:13:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.973 04:13:08 -- nvmf/common.sh@120 -- # set +e 00:21:53.973 04:13:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.973 04:13:08 -- nvmf/common.sh@122 --
# modprobe -v -r nvme-tcp 00:21:53.973 rmmod nvme_tcp 00:21:53.973 rmmod nvme_fabrics 00:21:53.973 rmmod nvme_keyring 00:21:53.973 04:13:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.973 04:13:08 -- nvmf/common.sh@124 -- # set -e 00:21:53.973 04:13:08 -- nvmf/common.sh@125 -- # return 0 00:21:53.973 04:13:08 -- nvmf/common.sh@478 -- # '[' -n 3907359 ']' 00:21:53.973 04:13:08 -- nvmf/common.sh@479 -- # killprocess 3907359 00:21:53.973 04:13:08 -- common/autotest_common.sh@936 -- # '[' -z 3907359 ']' 00:21:53.973 04:13:08 -- common/autotest_common.sh@940 -- # kill -0 3907359 00:21:53.973 04:13:08 -- common/autotest_common.sh@941 -- # uname 00:21:53.973 04:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.973 04:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3907359 00:21:53.973 04:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:54.232 04:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:54.232 04:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3907359' 00:21:54.232 killing process with pid 3907359 00:21:54.232 04:13:08 -- common/autotest_common.sh@955 -- # kill 3907359 00:21:54.232 04:13:08 -- common/autotest_common.sh@960 -- # wait 3907359 00:21:54.232 04:13:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:54.232 04:13:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:54.232 04:13:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:54.232 04:13:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.232 04:13:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.232 04:13:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.232 04:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.232 04:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.770 04:13:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.770 00:21:56.770 real 0m39.249s 00:21:56.770 user 2m7.965s 00:21:56.770 sys 0m7.576s 00:21:56.770 04:13:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:56.770 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.770 ************************************ 00:21:56.770 END TEST nvmf_failover 00:21:56.770 ************************************ 00:21:56.770 04:13:10 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:56.770 04:13:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:56.770 04:13:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:56.770 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:21:56.770 ************************************ 00:21:56.770 START TEST nvmf_discovery 00:21:56.770 ************************************ 00:21:56.770 04:13:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:56.770 * Looking for test storage... 
00:21:56.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.770 04:13:11 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.770 04:13:11 -- nvmf/common.sh@7 -- # uname -s 00:21:56.770 04:13:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.770 04:13:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.770 04:13:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.770 04:13:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.770 04:13:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.770 04:13:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.770 04:13:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.770 04:13:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.770 04:13:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.770 04:13:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.770 04:13:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:56.770 04:13:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:56.770 04:13:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.770 04:13:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.770 04:13:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.770 04:13:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.770 04:13:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.770 04:13:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.770 04:13:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.770 04:13:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.770 04:13:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.770 04:13:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.770 04:13:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.770 04:13:11 -- paths/export.sh@5 -- # export PATH 00:21:56.770 04:13:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.770 04:13:11 -- nvmf/common.sh@47 -- # : 0 00:21:56.770 04:13:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.770 04:13:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.770 04:13:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.770 04:13:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.770 04:13:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.770 04:13:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.770 04:13:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.770 04:13:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.770 04:13:11 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:56.770 04:13:11 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:56.770 04:13:11 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:56.770 04:13:11 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:56.770 04:13:11 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:56.770 04:13:11 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:56.770 04:13:11 -- host/discovery.sh@25 -- # nvmftestinit 00:21:56.770 04:13:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:56.770 04:13:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.770 04:13:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:56.770 04:13:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:56.770 04:13:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:56.770 04:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.770 04:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.770 04:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.770 04:13:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:56.770 04:13:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:56.770 04:13:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.770 04:13:11 -- common/autotest_common.sh@10 -- # set +x 00:22:02.041 04:13:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:02.041 04:13:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.041 04:13:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.041 04:13:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.041 04:13:16 -- 
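One easy-to-miss step in the common.sh preamble above: the host identity is generated per run by nvme-cli rather than hard-coded. A small sketch of the derivation, assuming (as the traced values suggest) that NVME_HOSTID is simply the UUID suffix of the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip the prefix, keep the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")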
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.041 04:13:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.041 04:13:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.041 04:13:16 -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.041 04:13:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.041 04:13:16 -- nvmf/common.sh@296 -- # e810=() 00:22:02.041 04:13:16 -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.041 04:13:16 -- nvmf/common.sh@297 -- # x722=() 00:22:02.041 04:13:16 -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.041 04:13:16 -- nvmf/common.sh@298 -- # mlx=() 00:22:02.041 04:13:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.041 04:13:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.041 04:13:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.041 04:13:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.041 04:13:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.041 04:13:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.041 04:13:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:02.041 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:02.041 04:13:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.041 04:13:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:02.041 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:02.041 04:13:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.041 04:13:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.041 04:13:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.041 
04:13:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.041 04:13:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.041 04:13:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.042 04:13:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:02.042 Found net devices under 0000:af:00.0: cvl_0_0 00:22:02.042 04:13:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.042 04:13:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.042 04:13:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.042 04:13:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:02.042 04:13:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.042 04:13:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:02.042 Found net devices under 0000:af:00.1: cvl_0_1 00:22:02.042 04:13:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.042 04:13:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:02.042 04:13:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:02.042 04:13:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:02.042 04:13:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:02.042 04:13:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:02.042 04:13:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.042 04:13:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.042 04:13:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.042 04:13:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.042 04:13:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.042 04:13:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.042 04:13:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.042 04:13:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.042 04:13:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.042 04:13:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.042 04:13:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.042 04:13:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.042 04:13:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.301 04:13:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.301 04:13:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.301 04:13:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.301 04:13:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.301 04:13:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.301 04:13:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.301 04:13:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:22:02.301 00:22:02.301 --- 10.0.0.2 ping statistics --- 00:22:02.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.301 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:02.301 04:13:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:22:02.301 00:22:02.301 --- 10.0.0.1 ping statistics --- 00:22:02.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.301 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:02.301 04:13:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.301 04:13:16 -- nvmf/common.sh@411 -- # return 0 00:22:02.301 04:13:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:02.301 04:13:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.301 04:13:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:02.301 04:13:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:02.301 04:13:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.301 04:13:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:02.301 04:13:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:02.560 04:13:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:02.560 04:13:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:02.560 04:13:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:02.560 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.560 04:13:16 -- nvmf/common.sh@470 -- # nvmfpid=3916448 00:22:02.560 04:13:16 -- nvmf/common.sh@471 -- # waitforlisten 3916448 00:22:02.560 04:13:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.560 04:13:16 -- common/autotest_common.sh@817 -- # '[' -z 3916448 ']' 00:22:02.560 04:13:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.560 04:13:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:02.560 04:13:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.560 04:13:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:02.560 04:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:02.560 [2024-04-19 04:13:16.909380] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:02.560 [2024-04-19 04:13:16.909437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.560 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.560 [2024-04-19 04:13:16.987185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.560 [2024-04-19 04:13:17.076742] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.560 [2024-04-19 04:13:17.076784] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.560 [2024-04-19 04:13:17.076796] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.560 [2024-04-19 04:13:17.076804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.560 [2024-04-19 04:13:17.076812] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
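Both pings passing means the namespace plumbing from nvmf_tcp_init is sound: the target-side port lives in its own network namespace while the initiator-side port stays in the root namespace, so the NVMe/TCP traffic really crosses between the two e810 ports. Condensed from the trace above (device names as detected on this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator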
00:22:02.560 [2024-04-19 04:13:17.076839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.820 04:13:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:02.820 04:13:17 -- common/autotest_common.sh@850 -- # return 0 00:22:02.820 04:13:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:02.820 04:13:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 04:13:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.820 04:13:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.820 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 [2024-04-19 04:13:17.220796] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.820 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.820 04:13:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:02.820 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 [2024-04-19 04:13:17.232959] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:02.820 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.820 04:13:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:02.820 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 null0 00:22:02.820 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.820 04:13:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:02.820 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 null1 00:22:02.820 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.820 04:13:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:02.820 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:02.820 04:13:17 -- host/discovery.sh@45 -- # hostpid=3916674 00:22:02.820 04:13:17 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:02.820 04:13:17 -- host/discovery.sh@46 -- # waitforlisten 3916674 /tmp/host.sock 00:22:02.820 04:13:17 -- common/autotest_common.sh@817 -- # '[' -z 3916674 ']' 00:22:02.820 04:13:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:02.820 04:13:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:02.820 04:13:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:02.820 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:02.820 04:13:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:02.820 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 [2024-04-19 04:13:17.312143] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
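At this point the discovery topology is assembled: the target in the namespace has a TCP transport, a discovery listener on port 8009, and two null bdevs for later namespaces, and a second nvmf_tgt on its own RPC socket is starting up to act as the host. Boiled down (default target socket and the waitforlisten polling omitted; the discovery RPC itself is traced just below):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Target side:
    "$spdk"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    "$spdk"/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    "$spdk"/scripts/rpc.py bdev_null_create null0 1000 512    # size (MB) and block size
    "$spdk"/scripts/rpc.py bdev_null_create null1 1000 512
    # Host side, separate app and socket:
    "$spdk"/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    # Once it listens, point it at the discovery service:
    "$spdk"/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test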
00:22:02.821 [2024-04-19 04:13:17.312196] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916674 ] 00:22:02.821 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.081 [2024-04-19 04:13:17.392232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.081 [2024-04-19 04:13:17.481614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.081 04:13:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:03.081 04:13:17 -- common/autotest_common.sh@850 -- # return 0 00:22:03.081 04:13:17 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.081 04:13:17 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:03.081 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.081 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.081 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.081 04:13:17 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:03.081 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.081 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.081 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.081 04:13:17 -- host/discovery.sh@72 -- # notify_id=0 00:22:03.081 04:13:17 -- host/discovery.sh@83 -- # get_subsystem_names 00:22:03.081 04:13:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.081 04:13:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.081 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.081 04:13:17 -- host/discovery.sh@59 -- # sort 00:22:03.081 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.081 04:13:17 -- host/discovery.sh@59 -- # xargs 00:22:03.340 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.340 04:13:17 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:03.340 04:13:17 -- host/discovery.sh@84 -- # get_bdev_list 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # sort 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # xargs 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.341 04:13:17 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:03.341 04:13:17 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.341 04:13:17 -- host/discovery.sh@87 -- # get_subsystem_names 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set 
+x 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # sort 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # xargs 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.341 04:13:17 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:03.341 04:13:17 -- host/discovery.sh@88 -- # get_bdev_list 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # xargs 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- host/discovery.sh@55 -- # sort 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.341 04:13:17 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:03.341 04:13:17 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.341 04:13:17 -- host/discovery.sh@91 -- # get_subsystem_names 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.341 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # sort 00:22:03.341 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.341 04:13:17 -- host/discovery.sh@59 -- # xargs 00:22:03.341 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:17 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:03.601 04:13:17 -- host/discovery.sh@92 -- # get_bdev_list 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # sort 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.601 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # xargs 00:22:03.601 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:17 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:03.601 04:13:17 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.601 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 [2024-04-19 04:13:17.938850] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.601 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:17 -- host/discovery.sh@97 -- # get_subsystem_names 00:22:03.601 04:13:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.601 04:13:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.601 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:17 -- host/discovery.sh@59 -- # sort 00:22:03.601 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 04:13:17 -- host/discovery.sh@59 -- # xargs 00:22:03.601 04:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:17 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:03.601 04:13:17 -- host/discovery.sh@98 -- # get_bdev_list 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.601 04:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # sort 00:22:03.601 04:13:17 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 04:13:17 -- host/discovery.sh@55 -- # xargs 00:22:03.601 04:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:18 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:03.601 04:13:18 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:03.601 04:13:18 -- host/discovery.sh@79 -- # expected_count=0 00:22:03.601 04:13:18 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:03.601 04:13:18 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:03.601 04:13:18 -- common/autotest_common.sh@901 -- # local max=10 00:22:03.601 04:13:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:03.601 04:13:18 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:03.601 04:13:18 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:03.601 04:13:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:03.601 04:13:18 -- host/discovery.sh@74 -- # jq '. | length' 00:22:03.601 04:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:18 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 04:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:18 -- host/discovery.sh@74 -- # notification_count=0 00:22:03.601 04:13:18 -- host/discovery.sh@75 -- # notify_id=0 00:22:03.601 04:13:18 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:03.601 04:13:18 -- common/autotest_common.sh@904 -- # return 0 00:22:03.601 04:13:18 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:03.601 04:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:18 -- common/autotest_common.sh@10 -- # set +x 00:22:03.601 04:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:03.601 04:13:18 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:03.601 04:13:18 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:03.601 04:13:18 -- common/autotest_common.sh@901 -- # local max=10 00:22:03.601 04:13:18 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:03.601 04:13:18 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:03.601 04:13:18 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:03.601 04:13:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.601 04:13:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.601 04:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:03.601 04:13:18 -- host/discovery.sh@59 -- # sort 00:22:03.602 04:13:18 -- common/autotest_common.sh@10 -- # set +x 00:22:03.602 04:13:18 -- host/discovery.sh@59 -- # xargs 00:22:03.602 04:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:22:03.861 04:13:18 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:22:03.861 04:13:18 -- common/autotest_common.sh@906 -- # sleep 1 00:22:04.430 [2024-04-19 04:13:18.652189] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.430 [2024-04-19 04:13:18.652212] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.430 [2024-04-19 04:13:18.652229] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.430 [2024-04-19 04:13:18.781682] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:04.430 [2024-04-19 04:13:18.883612] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.430 [2024-04-19 04:13:18.883635] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:04.690 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:04.690 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:04.690 04:13:19 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:04.690 04:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.690 04:13:19 -- host/discovery.sh@59 -- # xargs 00:22:04.690 04:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.690 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.690 04:13:19 -- host/discovery.sh@59 -- # sort 00:22:04.690 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.690 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.690 04:13:19 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.690 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:04.690 04:13:19 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:04.690 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:04.690 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:04.690 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:04.690 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:04.690 04:13:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # xargs 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.949 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # sort 00:22:04.949 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.949 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:04.949 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:04.949 04:13:19 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:04.949 04:13:19 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:04.949 04:13:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:04.949 04:13:19 -- host/discovery.sh@63 -- # xargs 00:22:04.949 04:13:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:04.949 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.949 04:13:19 -- host/discovery.sh@63 -- # sort -n 00:22:04.949 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.949 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:22:04.949 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:04.949 04:13:19 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:04.949 04:13:19 -- host/discovery.sh@79 -- # expected_count=1 00:22:04.949 04:13:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.949 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.949 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:04.949 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:04.949 04:13:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:04.949 04:13:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.949 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.949 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.949 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.949 04:13:19 -- host/discovery.sh@74 -- # notification_count=1 00:22:04.949 04:13:19 -- host/discovery.sh@75 -- # notify_id=1 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:04.949 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:04.949 04:13:19 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:04.949 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.949 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.949 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.949 04:13:19 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:04.949 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:04.949 04:13:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.949 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.949 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # sort 00:22:04.949 04:13:19 -- host/discovery.sh@55 -- # xargs 00:22:05.209 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.209 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:05.209 04:13:19 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:05.209 04:13:19 -- host/discovery.sh@79 -- # expected_count=1 00:22:05.209 04:13:19 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.209 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.209 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:05.209 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:05.209 04:13:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:05.209 04:13:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:05.209 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.209 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.209 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.209 04:13:19 -- host/discovery.sh@74 -- # notification_count=1 00:22:05.209 04:13:19 -- host/discovery.sh@75 -- # notify_id=2 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:05.209 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:05.209 04:13:19 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:05.209 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.209 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.209 [2024-04-19 04:13:19.627781] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.209 [2024-04-19 04:13:19.627965] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:05.209 [2024-04-19 04:13:19.627994] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.209 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.209 04:13:19 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.209 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.209 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:05.209 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:05.209 04:13:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.209 04:13:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.209 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.209 04:13:19 -- host/discovery.sh@59 -- # sort 00:22:05.209 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.209 04:13:19 -- host/discovery.sh@59 -- # xargs 00:22:05.209 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.209 04:13:19 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.210 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:05.210 04:13:19 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.210 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.210 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:05.210 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:05.210 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.210 04:13:19 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:05.210 04:13:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.210 04:13:19 -- host/discovery.sh@55 -- # xargs 00:22:05.210 04:13:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.210 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.210 04:13:19 -- host/discovery.sh@55 -- # sort 00:22:05.210 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.210 04:13:19 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:22:05.469 04:13:19 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.469 04:13:19 -- common/autotest_common.sh@904 -- # return 0 00:22:05.469 04:13:19 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.469 04:13:19 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.469 04:13:19 -- common/autotest_common.sh@901 -- # local max=10 00:22:05.469 04:13:19 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:05.469 04:13:19 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:05.469 04:13:19 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:05.469 04:13:19 -- host/discovery.sh@63 -- # sort -n 00:22:05.469 04:13:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.469 04:13:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.469 04:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.469 04:13:19 -- host/discovery.sh@63 -- # xargs 00:22:05.469 04:13:19 -- common/autotest_common.sh@10 -- # set +x 00:22:05.469 04:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.469 [2024-04-19 04:13:19.754850] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:05.469 04:13:19 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:05.469 04:13:19 -- common/autotest_common.sh@906 -- # sleep 1 00:22:05.728 [2024-04-19 04:13:20.056264] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.728 [2024-04-19 04:13:20.056290] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.728 [2024-04-19 04:13:20.056298] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.297 04:13:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.297 04:13:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:06.297 04:13:20 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:06.297 04:13:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.297 04:13:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.297 04:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.297 04:13:20 -- host/discovery.sh@63 -- # sort -n 00:22:06.297 04:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:06.297 04:13:20 -- host/discovery.sh@63 -- # xargs 00:22:06.297 04:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:06.557 04:13:20 -- common/autotest_common.sh@904 -- # return 0 00:22:06.557 04:13:20 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:06.557 04:13:20 -- host/discovery.sh@79 -- # expected_count=0 00:22:06.557 04:13:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.557 04:13:20 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.557 04:13:20 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.557 04:13:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:06.557 04:13:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.557 04:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.557 04:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:06.557 04:13:20 -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.557 04:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.557 04:13:20 -- host/discovery.sh@74 -- # notification_count=0 00:22:06.557 04:13:20 -- host/discovery.sh@75 -- # notify_id=2 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:06.557 04:13:20 -- common/autotest_common.sh@904 -- # return 0 00:22:06.557 04:13:20 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:06.557 04:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.557 04:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:06.557 [2024-04-19 04:13:20.895953] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:06.557 [2024-04-19 04:13:20.895983] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:06.557 04:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.557 04:13:20 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.557 04:13:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.557 04:13:20 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.557 04:13:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:06.557 [2024-04-19 04:13:20.904068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.557 [2024-04-19 04:13:20.904093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.557 [2024-04-19 04:13:20.904106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.557 [2024-04-19 04:13:20.904116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.557 [2024-04-19 04:13:20.904127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.557 [2024-04-19 04:13:20.904137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.557 [2024-04-19 04:13:20.904147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.557 [2024-04-19 04:13:20.904157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.557 [2024-04-19 04:13:20.904166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set 00:22:06.557 04:13:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.557 04:13:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.557 04:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.557 04:13:20 -- host/discovery.sh@59 -- # sort 00:22:06.557 04:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:06.557 04:13:20 -- host/discovery.sh@59 -- # xargs 00:22:06.557 [2024-04-19 04:13:20.914078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor 00:22:06.557 04:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.557 [2024-04-19 04:13:20.924119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.557 [2024-04-19 04:13:20.924468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.557 [2024-04-19 04:13:20.924762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.557 [2024-04-19 04:13:20.924777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420 00:22:06.557 [2024-04-19 04:13:20.924789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set 00:22:06.557 [2024-04-19 04:13:20.924805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor 00:22:06.557 [2024-04-19 04:13:20.924820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.557 [2024-04-19 04:13:20.924829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.557 [2024-04-19 04:13:20.924839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.557 [2024-04-19 04:13:20.924855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
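
The "connect() failed, errno = 111" (ECONNREFUSED) loop above is the expected fallout of host/discovery.sh@127: the 10.0.0.2:4420 listener was just removed while the host still had a controller attached through it, so each transparent reconnect attempt to 4420 is refused until the discovery poller prunes the stale path. A minimal sketch of the same sequence against a live target (illustrative only; rpc_cmd is the suite's thin wrapper around scripts/rpc.py):

    # Drop the first listener; the attached host now fails reconnects to 4420.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # The 4421 path added earlier keeps the controller reachable; list what is left.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
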
00:22:06.557 [2024-04-19 04:13:20.934184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:06.557 [2024-04-19 04:13:20.934498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.934701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.934716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420
00:22:06.557 [2024-04-19 04:13:20.934726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set
00:22:06.557 [2024-04-19 04:13:20.934741] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor
00:22:06.557 [2024-04-19 04:13:20.934754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:06.557 [2024-04-19 04:13:20.934763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:06.557 [2024-04-19 04:13:20.934772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:06.557 [2024-04-19 04:13:20.934787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.557 [2024-04-19 04:13:20.944246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:06.557 [2024-04-19 04:13:20.944547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.944731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.944745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420
00:22:06.557 [2024-04-19 04:13:20.944755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set
00:22:06.557 [2024-04-19 04:13:20.944770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor
00:22:06.557 [2024-04-19 04:13:20.944784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:06.557 [2024-04-19 04:13:20.944793] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:06.557 [2024-04-19 04:13:20.944802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:06.557 [2024-04-19 04:13:20.944816] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
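
The bash xtrace woven through this stretch of reconnect errors (local cond, local max=10, (( max-- )), eval, sleep 1, return 0) comes from waitforcondition, the suite's generic polling helper in autotest_common.sh. Its exact body is not shown in this log; a rough paraphrase inferred from the trace:

    # Poll an arbitrary bash condition for up to ~10 seconds (sketch, not the real helper).
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # retry once per second
        done
        return 1                       # timed out
    }
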
00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:06.557 04:13:20 -- common/autotest_common.sh@904 -- # return 0
00:22:06.557 04:13:20 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:06.557 04:13:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:06.557 04:13:20 -- common/autotest_common.sh@901 -- # local max=10
00:22:06.557 04:13:20 -- common/autotest_common.sh@902 -- # (( max-- ))
00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:06.557 [2024-04-19 04:13:20.954312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:06.557 [2024-04-19 04:13:20.954485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.954687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.557 [2024-04-19 04:13:20.954701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420
00:22:06.557 [2024-04-19 04:13:20.954711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set
00:22:06.557 [2024-04-19 04:13:20.954726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor
00:22:06.557 [2024-04-19 04:13:20.954740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:06.557 [2024-04-19 04:13:20.954748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:06.557 [2024-04-19 04:13:20.954758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:06.557 [2024-04-19 04:13:20.954772] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.557 04:13:20 -- common/autotest_common.sh@903 -- # get_bdev_list
00:22:06.557 04:13:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:06.557 04:13:20 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:06.557 04:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:06.557 04:13:20 -- host/discovery.sh@55 -- # sort
00:22:06.557 04:13:20 -- common/autotest_common.sh@10 -- # set +x
00:22:06.557 04:13:20 -- host/discovery.sh@55 -- # xargs
00:22:06.558 [2024-04-19 04:13:20.964374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:06.558 [2024-04-19 04:13:20.964674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.558 [2024-04-19 04:13:20.964926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.558 [2024-04-19 04:13:20.964940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420
00:22:06.558 [2024-04-19 04:13:20.964950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set
00:22:06.558 [2024-04-19 04:13:20.964965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor
00:22:06.558 [2024-04-19 04:13:20.964979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:06.558 [2024-04-19 04:13:20.964987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:06.558 [2024-04-19 04:13:20.964997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:06.558 [2024-04-19 04:13:20.965011] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.558 [2024-04-19 04:13:20.974440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:22:06.558 [2024-04-19 04:13:20.974676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.558 [2024-04-19 04:13:20.974815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.558 [2024-04-19 04:13:20.974830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798ab0 with addr=10.0.0.2, port=4420
00:22:06.558 [2024-04-19 04:13:20.974840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798ab0 is same with the state(5) to be set
00:22:06.558 [2024-04-19 04:13:20.974863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798ab0 (9): Bad file descriptor
00:22:06.558 [2024-04-19 04:13:20.974876] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:22:06.558 [2024-04-19 04:13:20.974884] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:22:06.558 [2024-04-19 04:13:20.974894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:22:06.558 [2024-04-19 04:13:20.974907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
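
For the path checks, get_subsystem_paths (host/discovery.sh@63 in this trace) reduces the controller's transport IDs to a sorted list of service ports, so the earlier "4420 4421" collapses to "4421" once the first listener is gone. The same query, exactly as the trace runs it:

    # Expected output at this point in the test: 4421
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
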
00:22:06.558 [2024-04-19 04:13:20.982253] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:06.558 [2024-04-19 04:13:20.982275] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.558 04:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:06.558 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.558 04:13:21 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.558 04:13:21 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.558 04:13:21 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.558 04:13:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:06.558 04:13:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.558 04:13:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.558 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.558 04:13:21 -- host/discovery.sh@63 -- # sort -n 00:22:06.558 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.558 04:13:21 -- host/discovery.sh@63 -- # xargs 00:22:06.558 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.558 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.558 04:13:21 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:06.558 04:13:21 -- host/discovery.sh@79 -- # expected_count=0 00:22:06.558 04:13:21 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.558 04:13:21 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.558 04:13:21 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.558 04:13:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.558 04:13:21 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:06.558 04:13:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.558 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.558 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.558 04:13:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:06.558 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.816 04:13:21 -- host/discovery.sh@74 -- # notification_count=0 00:22:06.816 04:13:21 -- host/discovery.sh@75 -- # notify_id=2 00:22:06.816 04:13:21 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:06.816 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.816 04:13:21 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:06.816 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.816 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.816 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.816 04:13:21 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.816 04:13:21 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.816 04:13:21 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.816 04:13:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.816 04:13:21 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:06.816 04:13:21 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:06.816 04:13:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.816 04:13:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.816 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.816 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.816 04:13:21 -- host/discovery.sh@59 -- # sort 00:22:06.816 04:13:21 -- host/discovery.sh@59 -- # xargs 00:22:06.816 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.816 04:13:21 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:22:06.816 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.816 04:13:21 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:06.816 04:13:21 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:06.816 04:13:21 -- common/autotest_common.sh@901 -- # local max=10 00:22:06.816 04:13:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:06.817 04:13:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.817 04:13:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.817 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.817 04:13:21 -- host/discovery.sh@55 -- # sort 00:22:06.817 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 04:13:21 -- host/discovery.sh@55 -- # xargs 00:22:06.817 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:22:06.817 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.817 04:13:21 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:06.817 04:13:21 -- host/discovery.sh@79 -- # expected_count=2 00:22:06.817 04:13:21 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.817 04:13:21 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.817 04:13:21 -- common/autotest_common.sh@901 -- # 
local max=10 00:22:06.817 04:13:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:06.817 04:13:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.817 04:13:21 -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.817 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.817 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 04:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.817 04:13:21 -- host/discovery.sh@74 -- # notification_count=2 00:22:06.817 04:13:21 -- host/discovery.sh@75 -- # notify_id=4 00:22:06.817 04:13:21 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:06.817 04:13:21 -- common/autotest_common.sh@904 -- # return 0 00:22:06.817 04:13:21 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:06.817 04:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.817 04:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:08.213 [2024-04-19 04:13:22.349508] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:08.213 [2024-04-19 04:13:22.349529] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:08.213 [2024-04-19 04:13:22.349546] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.213 [2024-04-19 04:13:22.435829] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:08.473 [2024-04-19 04:13:22.741824] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:08.473 [2024-04-19 04:13:22.741861] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@638 -- # local es=0 00:22:08.473 04:13:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 request: 00:22:08.473 { 00:22:08.473 "name": "nvme", 00:22:08.473 "trtype": "tcp", 00:22:08.473 "traddr": 
"10.0.0.2", 00:22:08.473 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.473 "adrfam": "ipv4", 00:22:08.473 "trsvcid": "8009", 00:22:08.473 "wait_for_attach": true, 00:22:08.473 "method": "bdev_nvme_start_discovery", 00:22:08.473 "req_id": 1 00:22:08.473 } 00:22:08.473 Got JSON-RPC error response 00:22:08.473 response: 00:22:08.473 { 00:22:08.473 "code": -17, 00:22:08.473 "message": "File exists" 00:22:08.473 } 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:08.473 04:13:22 -- common/autotest_common.sh@641 -- # es=1 00:22:08.473 04:13:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:08.473 04:13:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:08.473 04:13:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:08.473 04:13:22 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # sort 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # xargs 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:08.473 04:13:22 -- host/discovery.sh@146 -- # get_bdev_list 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # xargs 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # sort 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@638 -- # local es=0 00:22:08.473 04:13:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 request: 00:22:08.473 { 00:22:08.473 "name": "nvme_second", 00:22:08.473 "trtype": "tcp", 00:22:08.473 "traddr": "10.0.0.2", 00:22:08.473 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.473 "adrfam": "ipv4", 00:22:08.473 "trsvcid": "8009", 00:22:08.473 "wait_for_attach": true, 00:22:08.473 "method": 
"bdev_nvme_start_discovery", 00:22:08.473 "req_id": 1 00:22:08.473 } 00:22:08.473 Got JSON-RPC error response 00:22:08.473 response: 00:22:08.473 { 00:22:08.473 "code": -17, 00:22:08.473 "message": "File exists" 00:22:08.473 } 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:08.473 04:13:22 -- common/autotest_common.sh@641 -- # es=1 00:22:08.473 04:13:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:08.473 04:13:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:08.473 04:13:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:08.473 04:13:22 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # sort 00:22:08.473 04:13:22 -- host/discovery.sh@67 -- # xargs 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:08.473 04:13:22 -- host/discovery.sh@152 -- # get_bdev_list 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # xargs 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- host/discovery.sh@55 -- # sort 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.473 04:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.473 04:13:22 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.473 04:13:22 -- common/autotest_common.sh@638 -- # local es=0 00:22:08.473 04:13:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.473 04:13:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:08.473 04:13:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:08.473 04:13:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.473 04:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.473 04:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:09.849 [2024-04-19 04:13:24.001369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.849 [2024-04-19 04:13:24.001667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.849 [2024-04-19 04:13:24.001684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7950e0 with addr=10.0.0.2, port=8010 00:22:09.849 [2024-04-19 04:13:24.001699] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin 
qpair 00:22:09.849 [2024-04-19 04:13:24.001709] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:09.849 [2024-04-19 04:13:24.001717] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:10.784 [2024-04-19 04:13:25.003812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.784 [2024-04-19 04:13:25.004112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.784 [2024-04-19 04:13:25.004128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a2a30 with addr=10.0.0.2, port=8010 00:22:10.784 [2024-04-19 04:13:25.004142] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:10.784 [2024-04-19 04:13:25.004152] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:10.784 [2024-04-19 04:13:25.004160] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:11.732 [2024-04-19 04:13:26.006012] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:11.732 request: 00:22:11.732 { 00:22:11.732 "name": "nvme_second", 00:22:11.732 "trtype": "tcp", 00:22:11.732 "traddr": "10.0.0.2", 00:22:11.732 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:11.732 "adrfam": "ipv4", 00:22:11.732 "trsvcid": "8010", 00:22:11.732 "attach_timeout_ms": 3000, 00:22:11.732 "method": "bdev_nvme_start_discovery", 00:22:11.732 "req_id": 1 00:22:11.732 } 00:22:11.732 Got JSON-RPC error response 00:22:11.732 response: 00:22:11.732 { 00:22:11.732 "code": -110, 00:22:11.732 "message": "Connection timed out" 00:22:11.732 } 00:22:11.732 04:13:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:11.732 04:13:26 -- common/autotest_common.sh@641 -- # es=1 00:22:11.732 04:13:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:11.732 04:13:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:11.732 04:13:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:11.732 04:13:26 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:11.732 04:13:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:11.732 04:13:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:11.732 04:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.732 04:13:26 -- host/discovery.sh@67 -- # sort 00:22:11.732 04:13:26 -- common/autotest_common.sh@10 -- # set +x 00:22:11.732 04:13:26 -- host/discovery.sh@67 -- # xargs 00:22:11.732 04:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.732 04:13:26 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:11.732 04:13:26 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:11.732 04:13:26 -- host/discovery.sh@161 -- # kill 3916674 00:22:11.732 04:13:26 -- host/discovery.sh@162 -- # nvmftestfini 00:22:11.732 04:13:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:11.732 04:13:26 -- nvmf/common.sh@117 -- # sync 00:22:11.732 04:13:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.732 04:13:26 -- nvmf/common.sh@120 -- # set +e 00:22:11.732 04:13:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.732 04:13:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.732 rmmod nvme_tcp 00:22:11.732 rmmod nvme_fabrics 00:22:11.732 rmmod nvme_keyring 00:22:11.732 04:13:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.732 04:13:26 -- nvmf/common.sh@124 -- # set -e 
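
Both "File exists" (-17) responses and the final "Connection timed out" (-110) above are deliberate negative tests: host/discovery.sh@143 and @149 assert that starting a second discovery service with a name or address already in use is rejected, and @155 points nvme_second at the unused port 8010 with -T 3000 so the connect attempts time out after three tries. The NOT wrapper that inverts the exit status looks roughly like this (simplified; the real helper in autotest_common.sh also validates its argument, as the @626-@665 trace shows):

    # Succeed only when the wrapped command fails (sketch, not the real helper).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
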
00:22:11.732 04:13:26 -- nvmf/common.sh@125 -- # return 0
00:22:11.732 04:13:26 -- nvmf/common.sh@478 -- # '[' -n 3916448 ']'
00:22:11.732 04:13:26 -- nvmf/common.sh@479 -- # killprocess 3916448
00:22:11.732 04:13:26 -- common/autotest_common.sh@936 -- # '[' -z 3916448 ']'
00:22:11.732 04:13:26 -- common/autotest_common.sh@940 -- # kill -0 3916448
00:22:11.732 04:13:26 -- common/autotest_common.sh@941 -- # uname
00:22:11.732 04:13:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:11.732 04:13:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3916448
00:22:11.732 04:13:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:11.732 04:13:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:11.732 04:13:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3916448'
00:22:11.732 killing process with pid 3916448
00:22:11.732 04:13:26 -- common/autotest_common.sh@955 -- # kill 3916448
00:22:11.732 04:13:26 -- common/autotest_common.sh@960 -- # wait 3916448
00:22:11.992 04:13:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:22:11.992 04:13:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:22:11.992 04:13:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:22:11.992 04:13:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:11.992 04:13:26 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:11.992 04:13:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:11.992 04:13:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:11.992 04:13:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:14.527 04:13:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:14.527
00:22:14.527 real 0m17.478s
00:22:14.527 user 0m21.490s
00:22:14.527 sys 0m5.758s
00:22:14.527 04:13:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:22:14.527 04:13:28 -- common/autotest_common.sh@10 -- # set +x
00:22:14.527 ************************************
00:22:14.527 END TEST nvmf_discovery
00:22:14.527 ************************************
00:22:14.527 04:13:28 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:14.527 04:13:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:14.527 04:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:14.527 04:13:28 -- common/autotest_common.sh@10 -- # set +x
00:22:14.527 ************************************
00:22:14.527 START TEST nvmf_discovery_remove_ifc
00:22:14.527 ************************************
00:22:14.527 04:13:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:14.527 * Looking for test storage...
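
The harness now moves on to the next suite via run_test, which prints the timing block and the END/START banners above. discovery_remove_ifc.sh begins by sourcing test/nvmf/common.sh, and the trace that follows shows the defaults it establishes before nvmftestinit probes the phy NICs; in shell form (copied from the trace below):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:...
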
00:22:14.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.527 04:13:28 -- nvmf/common.sh@7 -- # uname -s 00:22:14.527 04:13:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.527 04:13:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.527 04:13:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.527 04:13:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.527 04:13:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.527 04:13:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.527 04:13:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.527 04:13:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.527 04:13:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.527 04:13:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.527 04:13:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:14.527 04:13:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:14.527 04:13:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.527 04:13:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.527 04:13:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.527 04:13:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.527 04:13:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.527 04:13:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.527 04:13:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.527 04:13:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.527 04:13:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.527 04:13:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.527 04:13:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.527 04:13:28 -- paths/export.sh@5 -- # export PATH 00:22:14.527 04:13:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.527 04:13:28 -- nvmf/common.sh@47 -- # : 0 00:22:14.527 04:13:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.527 04:13:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.527 04:13:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.527 04:13:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.527 04:13:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.527 04:13:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.527 04:13:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.527 04:13:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:14.527 04:13:28 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:14.527 04:13:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:14.527 04:13:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.527 04:13:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:14.527 04:13:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:14.527 04:13:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:14.527 04:13:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.527 04:13:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.527 04:13:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.527 04:13:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:14.527 04:13:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:14.527 04:13:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.527 04:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:19.835 04:13:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:19.835 04:13:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.835 04:13:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.835 04:13:34 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.835 04:13:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.835 04:13:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.835 04:13:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.835 04:13:34 -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.835 04:13:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.835 04:13:34 -- nvmf/common.sh@296 -- # e810=() 00:22:19.835 04:13:34 -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.835 04:13:34 -- nvmf/common.sh@297 -- # x722=() 00:22:19.835 04:13:34 -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.835 04:13:34 -- nvmf/common.sh@298 -- # mlx=() 00:22:19.835 04:13:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.835 04:13:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.835 04:13:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.835 04:13:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.835 04:13:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.835 04:13:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:19.835 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:19.835 04:13:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.835 04:13:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:19.835 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:19.835 04:13:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.835 04:13:34 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.835 04:13:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.835 04:13:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.835 04:13:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:19.835 Found net devices under 0000:af:00.0: cvl_0_0 00:22:19.835 04:13:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.835 04:13:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.835 04:13:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.835 04:13:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.835 04:13:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:19.835 Found net devices under 0000:af:00.1: cvl_0_1 00:22:19.835 04:13:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.835 04:13:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:19.835 04:13:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:19.835 04:13:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:19.835 04:13:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.835 04:13:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.835 04:13:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.835 04:13:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.835 04:13:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.835 04:13:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.835 04:13:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.835 04:13:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.835 04:13:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.835 04:13:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.835 04:13:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.835 04:13:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.835 04:13:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.835 04:13:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.835 04:13:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.835 04:13:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.835 04:13:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.094 04:13:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.094 04:13:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.094 04:13:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:20.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:20.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:22:20.094 00:22:20.094 --- 10.0.0.2 ping statistics --- 00:22:20.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.094 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:20.094 04:13:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:22:20.094 00:22:20.094 --- 10.0.0.1 ping statistics --- 00:22:20.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.094 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:20.094 04:13:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.094 04:13:34 -- nvmf/common.sh@411 -- # return 0 00:22:20.094 04:13:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:20.094 04:13:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.094 04:13:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:20.094 04:13:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:20.094 04:13:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.094 04:13:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:20.095 04:13:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:20.095 04:13:34 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:20.095 04:13:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:20.095 04:13:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:20.095 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.095 04:13:34 -- nvmf/common.sh@470 -- # nvmfpid=3921896 00:22:20.095 04:13:34 -- nvmf/common.sh@471 -- # waitforlisten 3921896 00:22:20.095 04:13:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:20.095 04:13:34 -- common/autotest_common.sh@817 -- # '[' -z 3921896 ']' 00:22:20.095 04:13:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.095 04:13:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.095 04:13:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.095 04:13:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.095 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.095 [2024-04-19 04:13:34.488974] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:22:20.095 [2024-04-19 04:13:34.489028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.095 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.095 [2024-04-19 04:13:34.566521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.354 [2024-04-19 04:13:34.655069] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.354 [2024-04-19 04:13:34.655114] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:20.354 [2024-04-19 04:13:34.655125] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.354 [2024-04-19 04:13:34.655133] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.354 [2024-04-19 04:13:34.655141] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.354 [2024-04-19 04:13:34.655161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.354 04:13:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.354 04:13:34 -- common/autotest_common.sh@850 -- # return 0 00:22:20.354 04:13:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:20.354 04:13:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:20.354 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.354 04:13:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.354 04:13:34 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:20.354 04:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.354 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.354 [2024-04-19 04:13:34.806860] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.354 [2024-04-19 04:13:34.815023] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:20.354 null0 00:22:20.354 [2024-04-19 04:13:34.847018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.354 04:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.354 04:13:34 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3922122 00:22:20.354 04:13:34 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3922122 /tmp/host.sock 00:22:20.354 04:13:34 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:20.354 04:13:34 -- common/autotest_common.sh@817 -- # '[' -z 3922122 ']' 00:22:20.354 04:13:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:20.354 04:13:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.354 04:13:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:20.354 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:20.354 04:13:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.354 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:20.613 [2024-04-19 04:13:34.916237] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
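Because the host app is launched with --wait-for-rpc, it parks after EAL init and does nothing until the test feeds it configuration over /tmp/host.sock; the trace shows the required ordering of bdev_nvme_set_options (options must land before subsystem init), framework_start_init, and only then the discovery attach. A sketch of that bring-up, with paths and flags taken from this log:

    # Start the host app paused; only the RPC server is live.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # Pre-init options first...
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    # ...then finish framework initialization...
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    # ...and only now attach to the discovery service on port 8009.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach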
00:22:20.613 [2024-04-19 04:13:34.916293] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922122 ] 00:22:20.613 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.613 [2024-04-19 04:13:34.995750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.613 [2024-04-19 04:13:35.085021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.613 04:13:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.613 04:13:35 -- common/autotest_common.sh@850 -- # return 0 00:22:20.613 04:13:35 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.613 04:13:35 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:20.613 04:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.613 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:22:20.613 04:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.613 04:13:35 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:20.613 04:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.613 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:22:20.872 04:13:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.872 04:13:35 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:20.872 04:13:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.872 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.809 [2024-04-19 04:13:36.223805] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:21.809 [2024-04-19 04:13:36.223829] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:21.809 [2024-04-19 04:13:36.223846] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:22.068 [2024-04-19 04:13:36.350310] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:22.068 [2024-04-19 04:13:36.569346] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:22.068 [2024-04-19 04:13:36.569405] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:22.068 [2024-04-19 04:13:36.569432] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:22.068 [2024-04-19 04:13:36.569449] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:22.068 [2024-04-19 04:13:36.569474] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:22.068 04:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.068 [2024-04-19 04:13:36.573394] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2338a70 was 
disconnected and freed. delete nvme_qpair. 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.068 04:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.068 04:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.068 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.326 04:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:22.326 04:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:22.326 04:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:22.326 04:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:22.326 04:13:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:23.703 04:13:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:23.703 04:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:23.703 04:13:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:23.703 04:13:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:24.640 04:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:24.640 04:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:24.640 04:13:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:24.640 04:13:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:25.578 04:13:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:25.578 04:13:39 -- common/autotest_common.sh@10 -- # set +x 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:25.578 04:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:25.578 04:13:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:26.516 04:13:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:26.516 04:13:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.516 04:13:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:26.516 04:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.516 04:13:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:26.516 04:13:40 -- common/autotest_common.sh@10 -- # set +x 00:22:26.516 04:13:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:26.516 04:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.516 04:13:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:26.516 04:13:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:27.892 [2024-04-19 04:13:42.010274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:27.892 [2024-04-19 04:13:42.010325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.893 [2024-04-19 04:13:42.010340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.893 [2024-04-19 04:13:42.010359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.893 [2024-04-19 04:13:42.010369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.893 [2024-04-19 04:13:42.010379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.893 [2024-04-19 04:13:42.010389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.893 [2024-04-19 04:13:42.010400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.893 [2024-04-19 04:13:42.010410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.893 [2024-04-19 04:13:42.010421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.893 [2024-04-19 04:13:42.010432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.893 [2024-04-19 04:13:42.010442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ffc70 is same with the state(5) to be set 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.893 04:13:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.893 04:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.893 [2024-04-19 04:13:42.020294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ffc70 (9): Bad file descriptor 00:22:27.893 [2024-04-19 04:13:42.030338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:27.893 04:13:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:27.893 04:13:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.829 [2024-04-19 04:13:43.055416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:28.829 04:13:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.829 04:13:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.829 04:13:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.829 04:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.829 04:13:43 -- common/autotest_common.sh@10 -- # set +x 00:22:28.829 04:13:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.829 04:13:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.767 [2024-04-19 04:13:44.079396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:29.767 [2024-04-19 04:13:44.079474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ffc70 with addr=10.0.0.2, port=4420 00:22:29.767 [2024-04-19 04:13:44.079506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ffc70 is same with the state(5) to be set 00:22:29.767 [2024-04-19 04:13:44.080420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ffc70 (9): Bad file descriptor 00:22:29.767 [2024-04-19 04:13:44.080477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
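The "remove ifc" under test is nothing more exotic than the harness sabotaging the target's interface inside its namespace: @75/@76 above delete the address and down the link, and @82/@83 (further below) put them back. The injected fault and its restore, lifted directly from the script lines in this trace:

    # Fault injection: the host's reconnect attempts now die with
    # errno 110 (ETIMEDOUT), as the posix_sock errors above show.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Restore: discovery re-attaches and surfaces the namespace again.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up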
00:22:29.767 [2024-04-19 04:13:44.080533] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:29.767 [2024-04-19 04:13:44.080581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.767 [2024-04-19 04:13:44.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.767 [2024-04-19 04:13:44.080636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.767 [2024-04-19 04:13:44.080659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.767 [2024-04-19 04:13:44.080683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.767 [2024-04-19 04:13:44.080705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.767 [2024-04-19 04:13:44.080728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.767 [2024-04-19 04:13:44.080749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.767 [2024-04-19 04:13:44.080774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.767 [2024-04-19 04:13:44.080796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.767 [2024-04-19 04:13:44.080817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
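Once the controller is marked failed, nvme0n1 drops out of bdev_get_bdevs, and the test detects both the disappearance and the later re-attach with a one-second polling loop. A hedged reconstruction of the two helpers whose script offsets (@29, @33, @34) recur throughout this trace:

    # get_bdev_list (@29): flatten the bdev names into one sorted line,
    # which is '' while no controller is attached.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # wait_for_bdev (@33/@34): poll until the list matches exactly.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev ''         # link down: nvme0n1 must vanish
    wait_for_bdev nvme1n1    # link restored: new controller must appear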
00:22:29.767 [2024-04-19 04:13:44.080872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff0f0 (9): Bad file descriptor 00:22:29.767 [2024-04-19 04:13:44.081874] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:29.767 [2024-04-19 04:13:44.081906] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:29.767 04:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.767 04:13:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.767 04:13:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.705 04:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.705 04:13:45 -- common/autotest_common.sh@10 -- # set +x 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.705 04:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.705 04:13:45 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.965 04:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.965 04:13:45 -- common/autotest_common.sh@10 -- # set +x 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.965 04:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:30.965 04:13:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.900 [2024-04-19 04:13:46.140557] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:31.900 [2024-04-19 04:13:46.140582] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:31.900 [2024-04-19 04:13:46.140601] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:31.900 [2024-04-19 04:13:46.226891] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:31.900 [2024-04-19 04:13:46.289807] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:31.900 [2024-04-19 04:13:46.289850] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:31.900 [2024-04-19 04:13:46.289874] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:31.900 [2024-04-19 04:13:46.289892] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach 
nvme1 done 00:22:31.900 [2024-04-19 04:13:46.289902] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:31.900 [2024-04-19 04:13:46.298768] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23436f0 was disconnected and freed. delete nvme_qpair. 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.900 04:13:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.900 04:13:46 -- common/autotest_common.sh@10 -- # set +x 00:22:31.900 04:13:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:31.900 04:13:46 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3922122 00:22:31.900 04:13:46 -- common/autotest_common.sh@936 -- # '[' -z 3922122 ']' 00:22:31.900 04:13:46 -- common/autotest_common.sh@940 -- # kill -0 3922122 00:22:31.900 04:13:46 -- common/autotest_common.sh@941 -- # uname 00:22:31.900 04:13:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.900 04:13:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3922122 00:22:31.900 04:13:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:31.900 04:13:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:31.900 04:13:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3922122' 00:22:31.900 killing process with pid 3922122 00:22:31.900 04:13:46 -- common/autotest_common.sh@955 -- # kill 3922122 00:22:31.900 04:13:46 -- common/autotest_common.sh@960 -- # wait 3922122 00:22:32.160 04:13:46 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:32.160 04:13:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:32.160 04:13:46 -- nvmf/common.sh@117 -- # sync 00:22:32.160 04:13:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.160 04:13:46 -- nvmf/common.sh@120 -- # set +e 00:22:32.160 04:13:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.160 04:13:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.160 rmmod nvme_tcp 00:22:32.160 rmmod nvme_fabrics 00:22:32.160 rmmod nvme_keyring 00:22:32.160 04:13:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.160 04:13:46 -- nvmf/common.sh@124 -- # set -e 00:22:32.160 04:13:46 -- nvmf/common.sh@125 -- # return 0 00:22:32.160 04:13:46 -- nvmf/common.sh@478 -- # '[' -n 3921896 ']' 00:22:32.419 04:13:46 -- nvmf/common.sh@479 -- # killprocess 3921896 00:22:32.419 04:13:46 -- common/autotest_common.sh@936 -- # '[' -z 3921896 ']' 00:22:32.419 04:13:46 -- common/autotest_common.sh@940 -- # kill -0 3921896 00:22:32.419 04:13:46 -- common/autotest_common.sh@941 -- # uname 00:22:32.419 04:13:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:32.419 04:13:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3921896 00:22:32.419 04:13:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:32.419 04:13:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:32.419 04:13:46 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3921896' 00:22:32.419 killing process with pid 3921896 00:22:32.419 04:13:46 -- common/autotest_common.sh@955 -- # kill 3921896 00:22:32.419 04:13:46 -- common/autotest_common.sh@960 -- # wait 3921896 00:22:32.678 04:13:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:32.678 04:13:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:32.678 04:13:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:32.678 04:13:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.678 04:13:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.678 04:13:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.678 04:13:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.678 04:13:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.583 04:13:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.583 00:22:34.583 real 0m20.369s 00:22:34.584 user 0m24.055s 00:22:34.584 sys 0m5.544s 00:22:34.584 04:13:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:34.584 04:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:34.584 ************************************ 00:22:34.584 END TEST nvmf_discovery_remove_ifc 00:22:34.584 ************************************ 00:22:34.584 04:13:49 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:34.584 04:13:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:34.584 04:13:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.584 04:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:34.844 ************************************ 00:22:34.844 START TEST nvmf_identify_kernel_target 00:22:34.844 ************************************ 00:22:34.844 04:13:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:34.844 * Looking for test storage... 
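nvmftestfini tears down in a fixed order: kill the host app, kill the target, then unload the kernel modules, with the unload wrapped in a tolerant retry loop because nvme-tcp can briefly stay busy while qpairs drain. A sketch of that pattern as it appears at nvmf/common.sh@120-125 in this trace (the sleep between attempts is an assumption; the log only shows the loop header and the modprobe calls):

    set +e                           # "module in use" is non-fatal here
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1                      # assumed back-off, not in the log
    done
    modprobe -v -r nvme-fabrics      # the rmmod lines above come from -v
    set -e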
00:22:34.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.844 04:13:49 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.844 04:13:49 -- nvmf/common.sh@7 -- # uname -s 00:22:34.844 04:13:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.844 04:13:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.844 04:13:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.844 04:13:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.844 04:13:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.844 04:13:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.844 04:13:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.844 04:13:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.844 04:13:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.844 04:13:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.844 04:13:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:34.844 04:13:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:34.844 04:13:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.844 04:13:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.844 04:13:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.844 04:13:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.844 04:13:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.844 04:13:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.844 04:13:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.844 04:13:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.844 04:13:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.844 04:13:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.844 04:13:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.844 04:13:49 -- paths/export.sh@5 -- # export PATH 00:22:34.844 04:13:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.844 04:13:49 -- nvmf/common.sh@47 -- # : 0 00:22:34.844 04:13:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.844 04:13:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.844 04:13:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.844 04:13:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.844 04:13:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.844 04:13:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.844 04:13:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.844 04:13:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.844 04:13:49 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:34.844 04:13:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:34.844 04:13:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.844 04:13:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:34.844 04:13:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:34.844 04:13:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:34.844 04:13:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.844 04:13:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.844 04:13:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.844 04:13:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:34.844 04:13:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:34.844 04:13:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.844 04:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:41.435 04:13:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:41.435 04:13:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.435 04:13:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.435 04:13:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.435 04:13:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.435 04:13:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.435 04:13:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.435 04:13:54 -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.435 04:13:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.435 04:13:54 -- nvmf/common.sh@296 -- # e810=() 00:22:41.435 04:13:54 -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.435 04:13:54 -- nvmf/common.sh@297 -- # 
x722=() 00:22:41.435 04:13:54 -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.435 04:13:54 -- nvmf/common.sh@298 -- # mlx=() 00:22:41.435 04:13:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.435 04:13:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.435 04:13:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.435 04:13:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.435 04:13:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.435 04:13:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:41.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:41.435 04:13:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.435 04:13:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:41.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:41.435 04:13:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.435 04:13:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.435 04:13:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.435 04:13:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:41.435 Found net devices under 0000:af:00.0: cvl_0_0 00:22:41.435 04:13:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
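The "Found net devices" lines above come from gather_supported_nvmf_pci_devs resolving each matched PCI function (device ID 0x159b, an Intel E810 port bound to the ice driver) to its kernel netdev through sysfs. The core of that lookup, extracted from the trace:

    pci=0000:af:00.0
    # A PCI network function lists its netdev(s) under .../net/.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Strip the sysfs path, keeping just the interface name.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0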
00:22:41.435 04:13:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.435 04:13:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.435 04:13:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.435 04:13:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:41.435 Found net devices under 0000:af:00.1: cvl_0_1 00:22:41.435 04:13:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.435 04:13:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:41.435 04:13:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:41.435 04:13:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.435 04:13:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.435 04:13:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.435 04:13:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.435 04:13:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.435 04:13:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.435 04:13:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.435 04:13:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.435 04:13:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.435 04:13:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.435 04:13:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.435 04:13:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.435 04:13:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.435 04:13:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.435 04:13:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.435 04:13:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.435 04:13:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.435 04:13:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.435 04:13:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.435 04:13:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:41.435 00:22:41.435 --- 10.0.0.2 ping statistics --- 00:22:41.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.435 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:41.435 04:13:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:41.435 00:22:41.435 --- 10.0.0.1 ping statistics --- 00:22:41.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.435 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:41.435 04:13:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.435 04:13:54 -- nvmf/common.sh@411 -- # return 0 00:22:41.435 04:13:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:41.435 04:13:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.435 04:13:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:41.435 04:13:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.435 04:13:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:41.435 04:13:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:41.435 04:13:55 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:41.435 04:13:55 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:41.435 04:13:55 -- nvmf/common.sh@717 -- # local ip 00:22:41.435 04:13:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.435 04:13:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.435 04:13:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.435 04:13:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.435 04:13:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:41.435 04:13:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.435 04:13:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:41.435 04:13:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:41.435 04:13:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:41.435 04:13:55 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:41.435 04:13:55 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:41.435 04:13:55 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:41.435 04:13:55 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:41.435 04:13:55 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:41.435 04:13:55 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:41.435 04:13:55 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:41.435 04:13:55 -- nvmf/common.sh@628 -- # local block nvme 00:22:41.435 04:13:55 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:41.435 04:13:55 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:41.435 04:13:55 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:41.435 04:13:55 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:43.343 Waiting for block devices as requested 00:22:43.343 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:22:43.343 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:43.602 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:43.602 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:43.602 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:43.861 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:43.861 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:43.861 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:43.861 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:44.120 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:44.120 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:44.120 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:44.379 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:44.379 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:44.379 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:44.379 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:44.638 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:44.638 04:13:59 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:44.638 04:13:59 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:44.638 04:13:59 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:44.638 04:13:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:44.638 04:13:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:44.638 04:13:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:44.638 04:13:59 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:44.638 04:13:59 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:44.638 04:13:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:44.638 No valid GPT data, bailing 00:22:44.638 04:13:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:44.638 04:13:59 -- scripts/common.sh@391 -- # pt= 00:22:44.638 04:13:59 -- scripts/common.sh@392 -- # return 1 00:22:44.638 04:13:59 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:44.638 04:13:59 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:44.638 04:13:59 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:44.638 04:13:59 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:44.638 04:13:59 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:44.638 04:13:59 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:44.638 04:13:59 -- nvmf/common.sh@656 -- # echo 1 00:22:44.638 04:13:59 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:44.638 04:13:59 -- nvmf/common.sh@658 -- # echo 1 00:22:44.638 04:13:59 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:44.638 04:13:59 -- nvmf/common.sh@661 -- # echo tcp 00:22:44.638 04:13:59 -- nvmf/common.sh@662 -- # echo 4420 00:22:44.639 04:13:59 -- nvmf/common.sh@663 -- # echo ipv4 00:22:44.639 04:13:59 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:44.639 04:13:59 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:22:44.899 00:22:44.899 Discovery Log Number of Records 2, Generation counter 2 00:22:44.899 =====Discovery Log Entry 0====== 00:22:44.899 trtype: tcp 00:22:44.899 adrfam: ipv4 00:22:44.899 subtype: current discovery subsystem 00:22:44.899 treq: not specified, sq flow control disable supported 00:22:44.899 portid: 1 00:22:44.899 trsvcid: 4420 00:22:44.899 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:44.899 traddr: 10.0.0.1 00:22:44.899 eflags: none 00:22:44.899 sectype: none 00:22:44.899 =====Discovery Log Entry 1====== 00:22:44.899 trtype: tcp 00:22:44.899 adrfam: ipv4 00:22:44.899 subtype: nvme subsystem 00:22:44.899 treq: not specified, sq flow control disable supported 00:22:44.899 portid: 1 00:22:44.899 trsvcid: 4420 00:22:44.899 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:44.899 traddr: 10.0.0.1 00:22:44.899 eflags: none 00:22:44.899 sectype: none 00:22:44.899 04:13:59 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:44.899 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:44.899 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.899 ===================================================== 00:22:44.899 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:44.899 ===================================================== 00:22:44.899 Controller Capabilities/Features 00:22:44.899 ================================ 00:22:44.899 Vendor ID: 0000 00:22:44.899 Subsystem Vendor ID: 0000 00:22:44.899 Serial Number: 44e7ea24c2069ae9c897 00:22:44.899 Model Number: Linux 00:22:44.899 Firmware Version: 6.7.0-68 00:22:44.899 Recommended Arb Burst: 0 00:22:44.899 IEEE OUI Identifier: 00 00 00 00:22:44.899 Multi-path I/O 00:22:44.899 May have multiple subsystem ports: No 00:22:44.899 May have multiple controllers: No 00:22:44.899 Associated with SR-IOV VF: No 00:22:44.899 Max Data Transfer Size: Unlimited 00:22:44.899 Max Number of Namespaces: 0 00:22:44.899 Max Number of I/O Queues: 1024 00:22:44.899 NVMe Specification Version (VS): 1.3 00:22:44.899 NVMe Specification Version (Identify): 1.3 00:22:44.899 Maximum Queue Entries: 1024 00:22:44.899 Contiguous Queues Required: No 00:22:44.899 Arbitration Mechanisms Supported 00:22:44.899 Weighted Round Robin: Not Supported 00:22:44.899 Vendor Specific: Not Supported 00:22:44.899 Reset Timeout: 7500 ms 00:22:44.899 Doorbell Stride: 4 bytes 00:22:44.899 NVM Subsystem Reset: Not Supported 00:22:44.899 Command Sets Supported 00:22:44.899 NVM Command Set: Supported 00:22:44.899 Boot Partition: Not Supported 00:22:44.899 Memory Page Size Minimum: 4096 bytes 00:22:44.899 Memory Page Size Maximum: 4096 bytes 00:22:44.899 Persistent Memory Region: Not Supported 00:22:44.899 Optional Asynchronous Events Supported 00:22:44.899 Namespace Attribute Notices: Not Supported 00:22:44.899 Firmware Activation Notices: Not Supported 00:22:44.899 ANA Change Notices: Not Supported 00:22:44.899 PLE Aggregate Log Change Notices: Not Supported 00:22:44.899 LBA Status Info Alert Notices: Not Supported 00:22:44.899 EGE Aggregate Log Change Notices: Not Supported 00:22:44.899 Normal NVM Subsystem Shutdown event: Not Supported 00:22:44.899 Zone Descriptor Change Notices: Not Supported 00:22:44.899 Discovery Log Change Notices: Supported 
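[Editor's note] The discovery page returned above carries two records: entry 0 is the discovery subsystem itself, entry 1 is the kernel target's nqn.2016-06.io.spdk:testnqn. From an initiator, the same records can be walked with stock nvme-cli; a hedged sketch using the addresses from this log:

# Re-query the discovery service (equivalent to the discover call above).
nvme discover -t tcp -a 10.0.0.1 -s 4420
# Attach to the I/O subsystem advertised in entry 1, then detach again.
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme disconnect -n nqn.2016-06.io.spdk:testnqn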
00:22:44.899 Controller Attributes 00:22:44.899 128-bit Host Identifier: Not Supported 00:22:44.899 Non-Operational Permissive Mode: Not Supported 00:22:44.899 NVM Sets: Not Supported 00:22:44.899 Read Recovery Levels: Not Supported 00:22:44.899 Endurance Groups: Not Supported 00:22:44.899 Predictable Latency Mode: Not Supported 00:22:44.899 Traffic Based Keep ALive: Not Supported 00:22:44.899 Namespace Granularity: Not Supported 00:22:44.899 SQ Associations: Not Supported 00:22:44.899 UUID List: Not Supported 00:22:44.899 Multi-Domain Subsystem: Not Supported 00:22:44.899 Fixed Capacity Management: Not Supported 00:22:44.899 Variable Capacity Management: Not Supported 00:22:44.899 Delete Endurance Group: Not Supported 00:22:44.899 Delete NVM Set: Not Supported 00:22:44.899 Extended LBA Formats Supported: Not Supported 00:22:44.899 Flexible Data Placement Supported: Not Supported 00:22:44.900 00:22:44.900 Controller Memory Buffer Support 00:22:44.900 ================================ 00:22:44.900 Supported: No 00:22:44.900 00:22:44.900 Persistent Memory Region Support 00:22:44.900 ================================ 00:22:44.900 Supported: No 00:22:44.900 00:22:44.900 Admin Command Set Attributes 00:22:44.900 ============================ 00:22:44.900 Security Send/Receive: Not Supported 00:22:44.900 Format NVM: Not Supported 00:22:44.900 Firmware Activate/Download: Not Supported 00:22:44.900 Namespace Management: Not Supported 00:22:44.900 Device Self-Test: Not Supported 00:22:44.900 Directives: Not Supported 00:22:44.900 NVMe-MI: Not Supported 00:22:44.900 Virtualization Management: Not Supported 00:22:44.900 Doorbell Buffer Config: Not Supported 00:22:44.900 Get LBA Status Capability: Not Supported 00:22:44.900 Command & Feature Lockdown Capability: Not Supported 00:22:44.900 Abort Command Limit: 1 00:22:44.900 Async Event Request Limit: 1 00:22:44.900 Number of Firmware Slots: N/A 00:22:44.900 Firmware Slot 1 Read-Only: N/A 00:22:44.900 Firmware Activation Without Reset: N/A 00:22:44.900 Multiple Update Detection Support: N/A 00:22:44.900 Firmware Update Granularity: No Information Provided 00:22:44.900 Per-Namespace SMART Log: No 00:22:44.900 Asymmetric Namespace Access Log Page: Not Supported 00:22:44.900 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:44.900 Command Effects Log Page: Not Supported 00:22:44.900 Get Log Page Extended Data: Supported 00:22:44.900 Telemetry Log Pages: Not Supported 00:22:44.900 Persistent Event Log Pages: Not Supported 00:22:44.900 Supported Log Pages Log Page: May Support 00:22:44.900 Commands Supported & Effects Log Page: Not Supported 00:22:44.900 Feature Identifiers & Effects Log Page:May Support 00:22:44.900 NVMe-MI Commands & Effects Log Page: May Support 00:22:44.900 Data Area 4 for Telemetry Log: Not Supported 00:22:44.900 Error Log Page Entries Supported: 1 00:22:44.900 Keep Alive: Not Supported 00:22:44.900 00:22:44.900 NVM Command Set Attributes 00:22:44.900 ========================== 00:22:44.900 Submission Queue Entry Size 00:22:44.900 Max: 1 00:22:44.900 Min: 1 00:22:44.900 Completion Queue Entry Size 00:22:44.900 Max: 1 00:22:44.900 Min: 1 00:22:44.900 Number of Namespaces: 0 00:22:44.900 Compare Command: Not Supported 00:22:44.900 Write Uncorrectable Command: Not Supported 00:22:44.900 Dataset Management Command: Not Supported 00:22:44.900 Write Zeroes Command: Not Supported 00:22:44.900 Set Features Save Field: Not Supported 00:22:44.900 Reservations: Not Supported 00:22:44.900 Timestamp: Not Supported 00:22:44.900 Copy: Not 
Supported 00:22:44.900 Volatile Write Cache: Not Present 00:22:44.900 Atomic Write Unit (Normal): 1 00:22:44.900 Atomic Write Unit (PFail): 1 00:22:44.900 Atomic Compare & Write Unit: 1 00:22:44.900 Fused Compare & Write: Not Supported 00:22:44.900 Scatter-Gather List 00:22:44.900 SGL Command Set: Supported 00:22:44.900 SGL Keyed: Not Supported 00:22:44.900 SGL Bit Bucket Descriptor: Not Supported 00:22:44.900 SGL Metadata Pointer: Not Supported 00:22:44.900 Oversized SGL: Not Supported 00:22:44.900 SGL Metadata Address: Not Supported 00:22:44.900 SGL Offset: Supported 00:22:44.900 Transport SGL Data Block: Not Supported 00:22:44.900 Replay Protected Memory Block: Not Supported 00:22:44.900 00:22:44.900 Firmware Slot Information 00:22:44.900 ========================= 00:22:44.900 Active slot: 0 00:22:44.900 00:22:44.900 00:22:44.900 Error Log 00:22:44.900 ========= 00:22:44.900 00:22:44.900 Active Namespaces 00:22:44.900 ================= 00:22:44.900 Discovery Log Page 00:22:44.900 ================== 00:22:44.900 Generation Counter: 2 00:22:44.900 Number of Records: 2 00:22:44.900 Record Format: 0 00:22:44.900 00:22:44.900 Discovery Log Entry 0 00:22:44.900 ---------------------- 00:22:44.900 Transport Type: 3 (TCP) 00:22:44.900 Address Family: 1 (IPv4) 00:22:44.900 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:44.900 Entry Flags: 00:22:44.900 Duplicate Returned Information: 0 00:22:44.900 Explicit Persistent Connection Support for Discovery: 0 00:22:44.900 Transport Requirements: 00:22:44.900 Secure Channel: Not Specified 00:22:44.900 Port ID: 1 (0x0001) 00:22:44.900 Controller ID: 65535 (0xffff) 00:22:44.900 Admin Max SQ Size: 32 00:22:44.900 Transport Service Identifier: 4420 00:22:44.900 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:44.900 Transport Address: 10.0.0.1 00:22:44.900 Discovery Log Entry 1 00:22:44.900 ---------------------- 00:22:44.900 Transport Type: 3 (TCP) 00:22:44.900 Address Family: 1 (IPv4) 00:22:44.900 Subsystem Type: 2 (NVM Subsystem) 00:22:44.900 Entry Flags: 00:22:44.900 Duplicate Returned Information: 0 00:22:44.900 Explicit Persistent Connection Support for Discovery: 0 00:22:44.900 Transport Requirements: 00:22:44.900 Secure Channel: Not Specified 00:22:44.900 Port ID: 1 (0x0001) 00:22:44.900 Controller ID: 65535 (0xffff) 00:22:44.900 Admin Max SQ Size: 32 00:22:44.900 Transport Service Identifier: 4420 00:22:44.900 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:44.900 Transport Address: 10.0.0.1 00:22:44.900 04:13:59 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:44.900 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.900 get_feature(0x01) failed 00:22:44.900 get_feature(0x02) failed 00:22:44.900 get_feature(0x04) failed 00:22:44.900 ===================================================== 00:22:44.900 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:44.900 ===================================================== 00:22:44.900 Controller Capabilities/Features 00:22:44.900 ================================ 00:22:44.900 Vendor ID: 0000 00:22:44.900 Subsystem Vendor ID: 0000 00:22:44.900 Serial Number: f1ed7aa84eb2f3d74003 00:22:44.900 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:44.900 Firmware Version: 6.7.0-68 00:22:44.900 Recommended Arb Burst: 6 00:22:44.900 IEEE OUI Identifier: 00 00 00 
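[Editor's note] The get_feature(0x01)/(0x02)/(0x04) failures logged just before this dump are spdk_nvme_identify probing optional features (01h Arbitration, 02h Power Management, 04h Temperature Threshold) that this kernel target evidently does not implement. Once connected, the same probes can be replayed by hand; the device node below is assumed:

# Replay the three probes against a connected controller, e.g. /dev/nvme0.
nvme get-feature /dev/nvme0 -f 0x01   # Arbitration
nvme get-feature /dev/nvme0 -f 0x02   # Power Management
nvme get-feature /dev/nvme0 -f 0x04   # Temperature Threshold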
00:22:44.900 Multi-path I/O 00:22:44.900 May have multiple subsystem ports: Yes 00:22:44.900 May have multiple controllers: Yes 00:22:44.900 Associated with SR-IOV VF: No 00:22:44.900 Max Data Transfer Size: Unlimited 00:22:44.900 Max Number of Namespaces: 1024 00:22:44.900 Max Number of I/O Queues: 128 00:22:44.900 NVMe Specification Version (VS): 1.3 00:22:44.900 NVMe Specification Version (Identify): 1.3 00:22:44.900 Maximum Queue Entries: 1024 00:22:44.900 Contiguous Queues Required: No 00:22:44.900 Arbitration Mechanisms Supported 00:22:44.900 Weighted Round Robin: Not Supported 00:22:44.900 Vendor Specific: Not Supported 00:22:44.900 Reset Timeout: 7500 ms 00:22:44.900 Doorbell Stride: 4 bytes 00:22:44.900 NVM Subsystem Reset: Not Supported 00:22:44.900 Command Sets Supported 00:22:44.900 NVM Command Set: Supported 00:22:44.900 Boot Partition: Not Supported 00:22:44.900 Memory Page Size Minimum: 4096 bytes 00:22:44.900 Memory Page Size Maximum: 4096 bytes 00:22:44.900 Persistent Memory Region: Not Supported 00:22:44.900 Optional Asynchronous Events Supported 00:22:44.900 Namespace Attribute Notices: Supported 00:22:44.900 Firmware Activation Notices: Not Supported 00:22:44.900 ANA Change Notices: Supported 00:22:44.900 PLE Aggregate Log Change Notices: Not Supported 00:22:44.900 LBA Status Info Alert Notices: Not Supported 00:22:44.900 EGE Aggregate Log Change Notices: Not Supported 00:22:44.900 Normal NVM Subsystem Shutdown event: Not Supported 00:22:44.900 Zone Descriptor Change Notices: Not Supported 00:22:44.900 Discovery Log Change Notices: Not Supported 00:22:44.900 Controller Attributes 00:22:44.900 128-bit Host Identifier: Supported 00:22:44.900 Non-Operational Permissive Mode: Not Supported 00:22:44.900 NVM Sets: Not Supported 00:22:44.900 Read Recovery Levels: Not Supported 00:22:44.900 Endurance Groups: Not Supported 00:22:44.900 Predictable Latency Mode: Not Supported 00:22:44.900 Traffic Based Keep ALive: Supported 00:22:44.900 Namespace Granularity: Not Supported 00:22:44.900 SQ Associations: Not Supported 00:22:44.900 UUID List: Not Supported 00:22:44.900 Multi-Domain Subsystem: Not Supported 00:22:44.900 Fixed Capacity Management: Not Supported 00:22:44.900 Variable Capacity Management: Not Supported 00:22:44.900 Delete Endurance Group: Not Supported 00:22:44.900 Delete NVM Set: Not Supported 00:22:44.900 Extended LBA Formats Supported: Not Supported 00:22:44.900 Flexible Data Placement Supported: Not Supported 00:22:44.900 00:22:44.900 Controller Memory Buffer Support 00:22:44.900 ================================ 00:22:44.900 Supported: No 00:22:44.900 00:22:44.900 Persistent Memory Region Support 00:22:44.900 ================================ 00:22:44.900 Supported: No 00:22:44.900 00:22:44.900 Admin Command Set Attributes 00:22:44.901 ============================ 00:22:44.901 Security Send/Receive: Not Supported 00:22:44.901 Format NVM: Not Supported 00:22:44.901 Firmware Activate/Download: Not Supported 00:22:44.901 Namespace Management: Not Supported 00:22:44.901 Device Self-Test: Not Supported 00:22:44.901 Directives: Not Supported 00:22:44.901 NVMe-MI: Not Supported 00:22:44.901 Virtualization Management: Not Supported 00:22:44.901 Doorbell Buffer Config: Not Supported 00:22:44.901 Get LBA Status Capability: Not Supported 00:22:44.901 Command & Feature Lockdown Capability: Not Supported 00:22:44.901 Abort Command Limit: 4 00:22:44.901 Async Event Request Limit: 4 00:22:44.901 Number of Firmware Slots: N/A 00:22:44.901 Firmware Slot 1 Read-Only: N/A 00:22:44.901 
Firmware Activation Without Reset: N/A 00:22:44.901 Multiple Update Detection Support: N/A 00:22:44.901 Firmware Update Granularity: No Information Provided 00:22:44.901 Per-Namespace SMART Log: Yes 00:22:44.901 Asymmetric Namespace Access Log Page: Supported 00:22:44.901 ANA Transition Time : 10 sec 00:22:44.901 00:22:44.901 Asymmetric Namespace Access Capabilities 00:22:44.901 ANA Optimized State : Supported 00:22:44.901 ANA Non-Optimized State : Supported 00:22:44.901 ANA Inaccessible State : Supported 00:22:44.901 ANA Persistent Loss State : Supported 00:22:44.901 ANA Change State : Supported 00:22:44.901 ANAGRPID is not changed : No 00:22:44.901 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:44.901 00:22:44.901 ANA Group Identifier Maximum : 128 00:22:44.901 Number of ANA Group Identifiers : 128 00:22:44.901 Max Number of Allowed Namespaces : 1024 00:22:44.901 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:44.901 Command Effects Log Page: Supported 00:22:44.901 Get Log Page Extended Data: Supported 00:22:44.901 Telemetry Log Pages: Not Supported 00:22:44.901 Persistent Event Log Pages: Not Supported 00:22:44.901 Supported Log Pages Log Page: May Support 00:22:44.901 Commands Supported & Effects Log Page: Not Supported 00:22:44.901 Feature Identifiers & Effects Log Page:May Support 00:22:44.901 NVMe-MI Commands & Effects Log Page: May Support 00:22:44.901 Data Area 4 for Telemetry Log: Not Supported 00:22:44.901 Error Log Page Entries Supported: 128 00:22:44.901 Keep Alive: Supported 00:22:44.901 Keep Alive Granularity: 1000 ms 00:22:44.901 00:22:44.901 NVM Command Set Attributes 00:22:44.901 ========================== 00:22:44.901 Submission Queue Entry Size 00:22:44.901 Max: 64 00:22:44.901 Min: 64 00:22:44.901 Completion Queue Entry Size 00:22:44.901 Max: 16 00:22:44.901 Min: 16 00:22:44.901 Number of Namespaces: 1024 00:22:44.901 Compare Command: Not Supported 00:22:44.901 Write Uncorrectable Command: Not Supported 00:22:44.901 Dataset Management Command: Supported 00:22:44.901 Write Zeroes Command: Supported 00:22:44.901 Set Features Save Field: Not Supported 00:22:44.901 Reservations: Not Supported 00:22:44.901 Timestamp: Not Supported 00:22:44.901 Copy: Not Supported 00:22:44.901 Volatile Write Cache: Present 00:22:44.901 Atomic Write Unit (Normal): 1 00:22:44.901 Atomic Write Unit (PFail): 1 00:22:44.901 Atomic Compare & Write Unit: 1 00:22:44.901 Fused Compare & Write: Not Supported 00:22:44.901 Scatter-Gather List 00:22:44.901 SGL Command Set: Supported 00:22:44.901 SGL Keyed: Not Supported 00:22:44.901 SGL Bit Bucket Descriptor: Not Supported 00:22:44.901 SGL Metadata Pointer: Not Supported 00:22:44.901 Oversized SGL: Not Supported 00:22:44.901 SGL Metadata Address: Not Supported 00:22:44.901 SGL Offset: Supported 00:22:44.901 Transport SGL Data Block: Not Supported 00:22:44.901 Replay Protected Memory Block: Not Supported 00:22:44.901 00:22:44.901 Firmware Slot Information 00:22:44.901 ========================= 00:22:44.901 Active slot: 0 00:22:44.901 00:22:44.901 Asymmetric Namespace Access 00:22:44.901 =========================== 00:22:44.901 Change Count : 0 00:22:44.901 Number of ANA Group Descriptors : 1 00:22:44.901 ANA Group Descriptor : 0 00:22:44.901 ANA Group ID : 1 00:22:44.901 Number of NSID Values : 1 00:22:44.901 Change Count : 0 00:22:44.901 ANA State : 1 00:22:44.901 Namespace Identifier : 1 00:22:44.901 00:22:44.901 Commands Supported and Effects 00:22:44.901 ============================== 00:22:44.901 Admin Commands 00:22:44.901 -------------- 
00:22:44.901 Get Log Page (02h): Supported 00:22:44.901 Identify (06h): Supported 00:22:44.901 Abort (08h): Supported 00:22:44.901 Set Features (09h): Supported 00:22:44.901 Get Features (0Ah): Supported 00:22:44.901 Asynchronous Event Request (0Ch): Supported 00:22:44.901 Keep Alive (18h): Supported 00:22:44.901 I/O Commands 00:22:44.901 ------------ 00:22:44.901 Flush (00h): Supported 00:22:44.901 Write (01h): Supported LBA-Change 00:22:44.901 Read (02h): Supported 00:22:44.901 Write Zeroes (08h): Supported LBA-Change 00:22:44.901 Dataset Management (09h): Supported 00:22:44.901 00:22:44.901 Error Log 00:22:44.901 ========= 00:22:44.901 Entry: 0 00:22:44.901 Error Count: 0x3 00:22:44.901 Submission Queue Id: 0x0 00:22:44.901 Command Id: 0x5 00:22:44.901 Phase Bit: 0 00:22:44.901 Status Code: 0x2 00:22:44.901 Status Code Type: 0x0 00:22:44.901 Do Not Retry: 1 00:22:44.901 Error Location: 0x28 00:22:44.901 LBA: 0x0 00:22:44.901 Namespace: 0x0 00:22:44.901 Vendor Log Page: 0x0 00:22:44.901 ----------- 00:22:44.901 Entry: 1 00:22:44.901 Error Count: 0x2 00:22:44.901 Submission Queue Id: 0x0 00:22:44.901 Command Id: 0x5 00:22:44.901 Phase Bit: 0 00:22:44.901 Status Code: 0x2 00:22:44.901 Status Code Type: 0x0 00:22:44.901 Do Not Retry: 1 00:22:44.901 Error Location: 0x28 00:22:44.901 LBA: 0x0 00:22:44.901 Namespace: 0x0 00:22:44.901 Vendor Log Page: 0x0 00:22:44.901 ----------- 00:22:44.901 Entry: 2 00:22:44.901 Error Count: 0x1 00:22:44.901 Submission Queue Id: 0x0 00:22:44.901 Command Id: 0x4 00:22:44.901 Phase Bit: 0 00:22:44.901 Status Code: 0x2 00:22:44.901 Status Code Type: 0x0 00:22:44.901 Do Not Retry: 1 00:22:44.901 Error Location: 0x28 00:22:44.901 LBA: 0x0 00:22:44.901 Namespace: 0x0 00:22:44.901 Vendor Log Page: 0x0 00:22:44.901 00:22:44.901 Number of Queues 00:22:44.901 ================ 00:22:44.901 Number of I/O Submission Queues: 128 00:22:44.901 Number of I/O Completion Queues: 128 00:22:44.901 00:22:44.901 ZNS Specific Controller Data 00:22:44.901 ============================ 00:22:44.901 Zone Append Size Limit: 0 00:22:44.901 00:22:44.901 00:22:44.901 Active Namespaces 00:22:44.901 ================= 00:22:44.901 get_feature(0x05) failed 00:22:44.901 Namespace ID:1 00:22:44.901 Command Set Identifier: NVM (00h) 00:22:44.901 Deallocate: Supported 00:22:44.901 Deallocated/Unwritten Error: Not Supported 00:22:44.901 Deallocated Read Value: Unknown 00:22:44.901 Deallocate in Write Zeroes: Not Supported 00:22:44.901 Deallocated Guard Field: 0xFFFF 00:22:44.901 Flush: Supported 00:22:44.901 Reservation: Not Supported 00:22:44.901 Namespace Sharing Capabilities: Multiple Controllers 00:22:44.901 Size (in LBAs): 1953525168 (931GiB) 00:22:44.901 Capacity (in LBAs): 1953525168 (931GiB) 00:22:44.901 Utilization (in LBAs): 1953525168 (931GiB) 00:22:44.901 UUID: 77d578cc-dd29-43e6-8e5d-84414aa0da84 00:22:44.901 Thin Provisioning: Not Supported 00:22:44.901 Per-NS Atomic Units: Yes 00:22:44.901 Atomic Boundary Size (Normal): 0 00:22:44.901 Atomic Boundary Size (PFail): 0 00:22:44.901 Atomic Boundary Offset: 0 00:22:44.901 NGUID/EUI64 Never Reused: No 00:22:44.901 ANA group ID: 1 00:22:44.901 Namespace Write Protected: No 00:22:44.901 Number of LBA Formats: 1 00:22:44.901 Current LBA Format: LBA Format #00 00:22:44.901 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:44.901 00:22:44.901 04:13:59 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:44.901 04:13:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:44.901 04:13:59 -- nvmf/common.sh@117 -- # sync 00:22:44.901 04:13:59 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.901 04:13:59 -- nvmf/common.sh@120 -- # set +e 00:22:44.901 04:13:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.901 04:13:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.901 rmmod nvme_tcp 00:22:44.901 rmmod nvme_fabrics 00:22:45.161 04:13:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.161 04:13:59 -- nvmf/common.sh@124 -- # set -e 00:22:45.161 04:13:59 -- nvmf/common.sh@125 -- # return 0 00:22:45.161 04:13:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:45.161 04:13:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:45.161 04:13:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:45.161 04:13:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:45.161 04:13:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.161 04:13:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.161 04:13:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.161 04:13:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.161 04:13:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.067 04:14:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.067 04:14:01 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:47.067 04:14:01 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:47.067 04:14:01 -- nvmf/common.sh@675 -- # echo 0 00:22:47.067 04:14:01 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.067 04:14:01 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:47.067 04:14:01 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:47.067 04:14:01 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.067 04:14:01 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:47.067 04:14:01 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:47.067 04:14:01 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:49.602 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.602 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.861 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:50.800 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:22:50.800 00:22:50.800 real 0m16.028s 00:22:50.800 user 0m3.762s 00:22:50.800 sys 0m8.319s 00:22:50.800 04:14:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:50.800 04:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.800 ************************************ 00:22:50.800 END 
TEST nvmf_identify_kernel_target 00:22:50.800 ************************************ 00:22:50.800 04:14:05 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:50.800 04:14:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:50.800 04:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:50.800 04:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.060 ************************************ 00:22:51.060 START TEST nvmf_auth 00:22:51.060 ************************************ 00:22:51.060 04:14:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:51.060 * Looking for test storage... 00:22:51.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.060 04:14:05 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.060 04:14:05 -- nvmf/common.sh@7 -- # uname -s 00:22:51.060 04:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.060 04:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.060 04:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.060 04:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.060 04:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.060 04:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.060 04:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.060 04:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.060 04:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.060 04:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.060 04:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:51.060 04:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:51.060 04:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.060 04:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.060 04:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.060 04:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.060 04:14:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.060 04:14:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.060 04:14:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.060 04:14:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.060 04:14:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.060 04:14:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.060 04:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.060 04:14:05 -- paths/export.sh@5 -- # export PATH 00:22:51.060 04:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.060 04:14:05 -- nvmf/common.sh@47 -- # : 0 00:22:51.060 04:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.060 04:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.060 04:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.061 04:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.061 04:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.061 04:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.061 04:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.061 04:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.061 04:14:05 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:51.061 04:14:05 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:51.061 04:14:05 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:51.061 04:14:05 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:51.061 04:14:05 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.061 04:14:05 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.061 04:14:05 -- host/auth.sh@21 -- # keys=() 00:22:51.061 04:14:05 -- host/auth.sh@77 -- # nvmftestinit 00:22:51.061 04:14:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:51.061 04:14:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.061 04:14:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:51.061 04:14:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:51.061 04:14:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:51.061 04:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.061 04:14:05 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.061 04:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.061 04:14:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:51.061 04:14:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:51.061 04:14:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.061 04:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:56.337 04:14:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:56.337 04:14:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.337 04:14:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.337 04:14:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.337 04:14:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.337 04:14:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.337 04:14:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.337 04:14:10 -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.337 04:14:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.337 04:14:10 -- nvmf/common.sh@296 -- # e810=() 00:22:56.337 04:14:10 -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.337 04:14:10 -- nvmf/common.sh@297 -- # x722=() 00:22:56.337 04:14:10 -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.337 04:14:10 -- nvmf/common.sh@298 -- # mlx=() 00:22:56.337 04:14:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.337 04:14:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.337 04:14:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.337 04:14:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.337 04:14:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.337 04:14:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:56.337 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:56.337 04:14:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.337 04:14:10 -- nvmf/common.sh@341 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:22:56.337 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:56.337 04:14:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.337 04:14:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.337 04:14:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.337 04:14:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:56.337 Found net devices under 0000:af:00.0: cvl_0_0 00:22:56.337 04:14:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.337 04:14:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.337 04:14:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.337 04:14:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.337 04:14:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:56.337 Found net devices under 0000:af:00.1: cvl_0_1 00:22:56.337 04:14:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.337 04:14:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:56.337 04:14:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:56.337 04:14:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:56.337 04:14:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.337 04:14:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.337 04:14:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.337 04:14:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.337 04:14:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.337 04:14:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.337 04:14:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.337 04:14:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.337 04:14:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.337 04:14:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.337 04:14:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.337 04:14:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.337 04:14:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.597 04:14:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.597 04:14:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.597 04:14:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.597 04:14:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.597 04:14:11 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.597 04:14:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.597 04:14:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:56.597 00:22:56.597 --- 10.0.0.2 ping statistics --- 00:22:56.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.597 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:56.597 04:14:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:56.597 00:22:56.597 --- 10.0.0.1 ping statistics --- 00:22:56.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.597 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:56.597 04:14:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.597 04:14:11 -- nvmf/common.sh@411 -- # return 0 00:22:56.597 04:14:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:56.597 04:14:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.597 04:14:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:56.597 04:14:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:56.597 04:14:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.597 04:14:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:56.597 04:14:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:56.597 04:14:11 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:22:56.597 04:14:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:56.597 04:14:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:56.597 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:56.597 04:14:11 -- nvmf/common.sh@470 -- # nvmfpid=3934234 00:22:56.597 04:14:11 -- nvmf/common.sh@471 -- # waitforlisten 3934234 00:22:56.597 04:14:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:56.597 04:14:11 -- common/autotest_common.sh@817 -- # '[' -z 3934234 ']' 00:22:56.597 04:14:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.597 04:14:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:56.597 04:14:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
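[Editor's note] Here the harness rebuilds the point-to-point topology (one E810 port moved into the cvl_0_0_ns_spdk namespace as the target side, its peer left in the root namespace as the initiator) and then launches nvmf_tgt inside that namespace, waiting until the app is listening. A rough standalone equivalent, with paths and the readiness check assumed rather than taken from the harness:

# Start the SPDK target inside the test namespace (flags as in the log).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Poll the default RPC socket until the app answers before sending RPCs,
# which is roughly what waitforlisten appears to do here.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"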
00:22:56.597 04:14:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:56.597 04:14:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.974 04:14:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:57.974 04:14:12 -- common/autotest_common.sh@850 -- # return 0 00:22:57.974 04:14:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:57.974 04:14:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.974 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:22:57.974 04:14:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.974 04:14:12 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:57.974 04:14:12 -- host/auth.sh@81 -- # gen_key null 32 00:22:57.974 04:14:12 -- host/auth.sh@53 -- # local digest len file key 00:22:57.974 04:14:12 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.974 04:14:12 -- host/auth.sh@54 -- # local -A digests 00:22:57.974 04:14:12 -- host/auth.sh@56 -- # digest=null 00:22:57.974 04:14:12 -- host/auth.sh@56 -- # len=32 00:22:57.974 04:14:12 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.974 04:14:12 -- host/auth.sh@57 -- # key=ef2e53dd249203e0192cf3e0cbec9e7e 00:22:57.974 04:14:12 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.iwe 00:22:57.975 04:14:12 -- host/auth.sh@59 -- # format_dhchap_key ef2e53dd249203e0192cf3e0cbec9e7e 0 00:22:57.975 04:14:12 -- nvmf/common.sh@708 -- # format_key DHHC-1 ef2e53dd249203e0192cf3e0cbec9e7e 0 00:22:57.975 04:14:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # key=ef2e53dd249203e0192cf3e0cbec9e7e 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # digest=0 00:22:57.975 04:14:12 -- nvmf/common.sh@694 -- # python - 00:22:57.975 04:14:12 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.iwe 00:22:57.975 04:14:12 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.iwe 00:22:57.975 04:14:12 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.iwe 00:22:57.975 04:14:12 -- host/auth.sh@82 -- # gen_key null 48 00:22:57.975 04:14:12 -- host/auth.sh@53 -- # local digest len file key 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # local -A digests 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # digest=null 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # len=48 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # key=43cb9be9c57ee6a6c7b622b20d36c972e48aabdc08ff789a 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.YuR 00:22:57.975 04:14:12 -- host/auth.sh@59 -- # format_dhchap_key 43cb9be9c57ee6a6c7b622b20d36c972e48aabdc08ff789a 0 00:22:57.975 04:14:12 -- nvmf/common.sh@708 -- # format_key DHHC-1 43cb9be9c57ee6a6c7b622b20d36c972e48aabdc08ff789a 0 00:22:57.975 04:14:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # key=43cb9be9c57ee6a6c7b622b20d36c972e48aabdc08ff789a 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # 
digest=0 00:22:57.975 04:14:12 -- nvmf/common.sh@694 -- # python - 00:22:57.975 04:14:12 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.YuR 00:22:57.975 04:14:12 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.YuR 00:22:57.975 04:14:12 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.YuR 00:22:57.975 04:14:12 -- host/auth.sh@83 -- # gen_key sha256 32 00:22:57.975 04:14:12 -- host/auth.sh@53 -- # local digest len file key 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # local -A digests 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # digest=sha256 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # len=32 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # key=84634137a03d6dbec4892045c793ad1f 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.qA0 00:22:57.975 04:14:12 -- host/auth.sh@59 -- # format_dhchap_key 84634137a03d6dbec4892045c793ad1f 1 00:22:57.975 04:14:12 -- nvmf/common.sh@708 -- # format_key DHHC-1 84634137a03d6dbec4892045c793ad1f 1 00:22:57.975 04:14:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # key=84634137a03d6dbec4892045c793ad1f 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # digest=1 00:22:57.975 04:14:12 -- nvmf/common.sh@694 -- # python - 00:22:57.975 04:14:12 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.qA0 00:22:57.975 04:14:12 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.qA0 00:22:57.975 04:14:12 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.qA0 00:22:57.975 04:14:12 -- host/auth.sh@84 -- # gen_key sha384 48 00:22:57.975 04:14:12 -- host/auth.sh@53 -- # local digest len file key 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # local -A digests 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # digest=sha384 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # len=48 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # key=37f19795cef05c78bee7caaa35c59bc533c40bc88e7de952 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.ocb 00:22:57.975 04:14:12 -- host/auth.sh@59 -- # format_dhchap_key 37f19795cef05c78bee7caaa35c59bc533c40bc88e7de952 2 00:22:57.975 04:14:12 -- nvmf/common.sh@708 -- # format_key DHHC-1 37f19795cef05c78bee7caaa35c59bc533c40bc88e7de952 2 00:22:57.975 04:14:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # key=37f19795cef05c78bee7caaa35c59bc533c40bc88e7de952 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # digest=2 00:22:57.975 04:14:12 -- nvmf/common.sh@694 -- # python - 00:22:57.975 04:14:12 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.ocb 00:22:57.975 04:14:12 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.ocb 00:22:57.975 04:14:12 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.ocb 00:22:57.975 04:14:12 -- host/auth.sh@85 -- # gen_key sha512 64 00:22:57.975 04:14:12 -- host/auth.sh@53 -- # local digest len file key 00:22:57.975 04:14:12 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.975 04:14:12 -- host/auth.sh@54 -- # local -A digests 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # digest=sha512 00:22:57.975 04:14:12 -- host/auth.sh@56 -- # len=64 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:57.975 04:14:12 -- host/auth.sh@57 -- # key=2ce6a7fa52b40aae8ea13e92ee5f74bfdbbdd8b8ab6cdf8b0f17991371b97ea6 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:22:57.975 04:14:12 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.vPJ 00:22:57.975 04:14:12 -- host/auth.sh@59 -- # format_dhchap_key 2ce6a7fa52b40aae8ea13e92ee5f74bfdbbdd8b8ab6cdf8b0f17991371b97ea6 3 00:22:57.975 04:14:12 -- nvmf/common.sh@708 -- # format_key DHHC-1 2ce6a7fa52b40aae8ea13e92ee5f74bfdbbdd8b8ab6cdf8b0f17991371b97ea6 3 00:22:57.975 04:14:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # key=2ce6a7fa52b40aae8ea13e92ee5f74bfdbbdd8b8ab6cdf8b0f17991371b97ea6 00:22:57.975 04:14:12 -- nvmf/common.sh@693 -- # digest=3 00:22:57.975 04:14:12 -- nvmf/common.sh@694 -- # python - 00:22:57.975 04:14:12 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.vPJ 00:22:57.975 04:14:12 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.vPJ 00:22:57.975 04:14:12 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.vPJ 00:22:57.975 04:14:12 -- host/auth.sh@87 -- # waitforlisten 3934234 00:22:57.975 04:14:12 -- common/autotest_common.sh@817 -- # '[' -z 3934234 ']' 00:22:57.975 04:14:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.975 04:14:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.975 04:14:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
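[Editor's note] Each gen_key call above draws random hex with xxd, pipes it through a small inline Python to produce a DHHC-1 secret, chmod 0600s the resulting file, and records it in keys[]. A sketch of that final transform, assuming the standard DHCHAP secret representation (secret bytes plus their little-endian CRC-32, base64-wrapped, with a two-digit field naming the HMAC: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512); note the generated hex string itself appears to serve as the secret bytes:

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 64 hex chars, used verbatim as the secret
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
raw = sys.argv[1].encode()                              # secret taken as ASCII bytes
crc = struct.pack('<I', zlib.crc32(raw) & 0xffffffff)   # little-endian CRC-32 suffix
print('DHHC-1:03:%s:' % base64.b64encode(raw + crc).decode())  # 03 = SHA-512 class
EOF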
00:22:57.975 04:14:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:57.975 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.234 04:14:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:58.234 04:14:12 -- common/autotest_common.sh@850 -- # return 0
00:22:58.234 04:14:12 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:22:58.234 04:14:12 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iwe
00:22:58.234 04:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:58.234 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.234 04:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:58.234 04:14:12 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:22:58.234 04:14:12 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YuR
00:22:58.234 04:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:58.234 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.234 04:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:58.234 04:14:12 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:22:58.235 04:14:12 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qA0
00:22:58.235 04:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:58.235 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.235 04:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:58.235 04:14:12 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:22:58.235 04:14:12 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ocb
00:22:58.235 04:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:58.235 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.235 04:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:58.235 04:14:12 -- host/auth.sh@88 -- # for i in "${!keys[@]}"
00:22:58.235 04:14:12 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vPJ
00:22:58.235 04:14:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:58.235 04:14:12 -- common/autotest_common.sh@10 -- # set +x
00:22:58.235 04:14:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:58.235 04:14:12 -- host/auth.sh@92 -- # nvmet_auth_init
00:22:58.235 04:14:12 -- host/auth.sh@35 -- # get_main_ns_ip
00:22:58.235 04:14:12 -- nvmf/common.sh@717 -- # local ip
00:22:58.235 04:14:12 -- nvmf/common.sh@718 -- # ip_candidates=()
00:22:58.235 04:14:12 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:22:58.235 04:14:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:22:58.235 04:14:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:22:58.235 04:14:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:22:58.235 04:14:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:22:58.235 04:14:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:22:58.235 04:14:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:22:58.235 04:14:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:22:58.235 04:14:12 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:22:58.235 04:14:12 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:22:58.235 04:14:12 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet
00:22:58.235 04:14:12 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
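
The rpc_cmd wrapper traced above is SPDK's shorthand for scripts/rpc.py pointed at the rpc_addr seen in the waitforlisten trace, so the five key registrations reduce to the sketch below (keys[] holds the /tmp/spdk.key-* paths generated earlier):

for i in "${!keys[@]}"; do
    scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[$i]}"
done
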
00:22:58.235 04:14:12 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:22:58.235 04:14:12 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:22:58.235 04:14:12 -- nvmf/common.sh@628 -- # local block nvme
00:22:58.235 04:14:12 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]]
00:22:58.235 04:14:12 -- nvmf/common.sh@631 -- # modprobe nvmet
00:22:58.494 04:14:12 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]]
00:22:58.494 04:14:12 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:23:01.029 Waiting for block devices as requested
00:23:01.029 0000:86:00.0 (8086 0a54): vfio-pci -> nvme
00:23:01.288 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:23:01.288 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:23:01.288 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:23:01.288 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:23:01.547 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:23:01.547 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:23:01.547 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:23:01.806 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:23:01.806 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:23:01.806 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:23:01.806 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:23:02.065 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:23:02.065 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:23:02.065 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:23:02.324 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:23:02.324 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:23:02.892 04:14:17 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
04:14:17 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]]
04:14:17 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1
04:14:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
04:14:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
04:14:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
04:14:17 -- nvmf/common.sh@642 -- # block_in_use nvme0n1
04:14:17 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
04:14:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
No valid GPT data, bailing
04:14:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
04:14:17 -- scripts/common.sh@391 -- # pt=
04:14:17 -- scripts/common.sh@392 -- # return 1
04:14:17 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
04:14:17 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]]
04:14:17 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
04:14:17 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
04:14:17 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
04:14:17 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
04:14:17 -- nvmf/common.sh@656 -- # echo 1
04:14:17 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
04:14:17 -- nvmf/common.sh@658 -- # echo 1
04:14:17 -- nvmf/common.sh@660 -- # echo 10.0.0.1
04:14:17 -- nvmf/common.sh@661 -- # echo tcp
04:14:17 -- nvmf/common.sh@662 -- # echo 4420
00:23:02.892 04:14:17 -- nvmf/common.sh@663 -- # echo ipv4
00:23:02.892 04:14:17 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
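
xtrace records the mkdir/echo/ln commands of configure_kernel_target but not their redirections, which is why the trace above shows bare echos. A hedged reconstruction, mapping each value onto the standard nvmet configfs attributes (the attribute names are assumptions; the values are verbatim from the trace):

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$sub" "$sub/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
echo 1 > "$sub/attr_allow_any_host"      # assumed target of the first bare "echo 1"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$sub" "$nvmet/ports/1/subsystems/"

The nvme discover call that follows is the sanity check that this port actually exposes the subsystem.
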
00:23:02.892 04:14:17 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:23:02.892
00:23:02.892 Discovery Log Number of Records 2, Generation counter 2
00:23:02.892 =====Discovery Log Entry 0======
00:23:02.892 trtype: tcp
00:23:02.892 adrfam: ipv4
00:23:02.892 subtype: current discovery subsystem
00:23:02.892 treq: not specified, sq flow control disable supported
00:23:02.892 portid: 1
00:23:02.892 trsvcid: 4420
00:23:02.892 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:23:02.892 traddr: 10.0.0.1
00:23:02.892 eflags: none
00:23:02.892 sectype: none
00:23:02.892 =====Discovery Log Entry 1======
00:23:02.892 trtype: tcp
00:23:02.892 adrfam: ipv4
00:23:02.892 subtype: nvme subsystem
00:23:02.892 treq: not specified, sq flow control disable supported
00:23:02.892 portid: 1
00:23:02.892 trsvcid: 4420
00:23:02.892 subnqn: nqn.2024-02.io.spdk:cnode0
00:23:02.892 traddr: 10.0.0.1
00:23:02.892 eflags: none
00:23:02.892 sectype: none
04:14:17 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
04:14:17 -- host/auth.sh@37 -- # echo 0
04:14:17 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
04:14:17 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
04:14:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key
04:14:17 -- host/auth.sh@44 -- # digest=sha256
04:14:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
04:14:17 -- host/auth.sh@44 -- # keyid=1
04:14:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==:
04:14:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
04:14:17 -- host/auth.sh@48 -- # echo ffdhe2048
04:14:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==:
04:14:17 -- host/auth.sh@100 -- # IFS=,
04:14:17 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512
04:14:17 -- host/auth.sh@100 -- # IFS=,
04:14:17 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:03.152 04:14:17 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:23:03.152 04:14:17 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:23:03.152 04:14:17 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512
00:23:03.152 04:14:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:03.152 04:14:17 -- host/auth.sh@68 -- # keyid=1
00:23:03.152 04:14:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:03.152 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:03.152 04:14:17 -- common/autotest_common.sh@10 -- # set +x
00:23:03.152 04:14:17 --
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.152 04:14:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.152 04:14:17 -- nvmf/common.sh@717 -- # local ip 00:23:03.152 04:14:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.152 04:14:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.152 04:14:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.152 04:14:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.152 04:14:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:03.152 04:14:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.152 04:14:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:03.152 04:14:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:03.152 04:14:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:03.152 04:14:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:03.152 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.152 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.152 nvme0n1 00:23:03.152 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.152 04:14:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.152 04:14:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.152 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.152 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.152 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.152 04:14:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.152 04:14:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.152 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.152 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.152 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.152 04:14:17 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:03.152 04:14:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.152 04:14:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.152 04:14:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:03.152 04:14:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.152 04:14:17 -- host/auth.sh@44 -- # digest=sha256 00:23:03.152 04:14:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.153 04:14:17 -- host/auth.sh@44 -- # keyid=0 00:23:03.153 04:14:17 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:03.153 04:14:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:03.153 04:14:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:03.153 04:14:17 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:03.153 04:14:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:23:03.153 04:14:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.153 04:14:17 -- host/auth.sh@68 -- # digest=sha256 00:23:03.153 04:14:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:03.153 04:14:17 -- host/auth.sh@68 -- # keyid=0 00:23:03.153 04:14:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.153 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.153 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.153 04:14:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.153 04:14:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.153 04:14:17 -- nvmf/common.sh@717 -- # local ip 00:23:03.153 04:14:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.153 04:14:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.153 04:14:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.153 04:14:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.153 04:14:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:03.153 04:14:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.153 04:14:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:03.153 04:14:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:03.153 04:14:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:03.153 04:14:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:03.153 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.153 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.442 nvme0n1 00:23:03.442 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.442 04:14:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.442 04:14:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.442 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.442 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.442 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.442 04:14:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.442 04:14:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.442 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.442 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.442 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.442 04:14:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.442 04:14:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:03.442 04:14:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.442 04:14:17 -- host/auth.sh@44 -- # digest=sha256 00:23:03.442 04:14:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.442 04:14:17 -- host/auth.sh@44 -- # keyid=1 00:23:03.442 04:14:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:03.442 04:14:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:03.442 04:14:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:03.442 04:14:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:03.442 04:14:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:23:03.442 04:14:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.442 04:14:17 -- host/auth.sh@68 -- # digest=sha256 00:23:03.442 04:14:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:03.442 04:14:17 -- host/auth.sh@68 -- # keyid=1 00:23:03.442 04:14:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.442 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.442 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.442 04:14:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.442 04:14:17 -- host/auth.sh@70 -- # get_main_ns_ip 
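
Each nvmet_auth_set_key iteration, like the one for key0 just above, emits the same echo triplet: the HMAC name, the DH group, and the DHHC-1 secret. Assuming the kernel's per-host DH-HMAC-CHAP configfs attributes as the redirection targets, it reads as:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"   # attribute names are assumptions
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M:' > "$host/dhchap_key"
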
00:23:03.442 04:14:17 -- nvmf/common.sh@717 -- # local ip 00:23:03.442 04:14:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.442 04:14:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.442 04:14:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.442 04:14:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.442 04:14:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:03.442 04:14:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.442 04:14:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:03.442 04:14:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:03.442 04:14:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:03.442 04:14:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:03.442 04:14:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.442 04:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.702 nvme0n1 00:23:03.702 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.702 04:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.702 04:14:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.702 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.702 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.702 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.702 04:14:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.702 04:14:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.702 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.702 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.702 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.702 04:14:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.702 04:14:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:03.702 04:14:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.702 04:14:18 -- host/auth.sh@44 -- # digest=sha256 00:23:03.702 04:14:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.702 04:14:18 -- host/auth.sh@44 -- # keyid=2 00:23:03.702 04:14:18 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:03.702 04:14:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:03.702 04:14:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:03.702 04:14:18 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:03.702 04:14:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:23:03.702 04:14:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.702 04:14:18 -- host/auth.sh@68 -- # digest=sha256 00:23:03.702 04:14:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:03.702 04:14:18 -- host/auth.sh@68 -- # keyid=2 00:23:03.702 04:14:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.702 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.702 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.702 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.702 04:14:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.702 04:14:18 -- nvmf/common.sh@717 -- # local ip 00:23:03.702 04:14:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.702 04:14:18 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:23:03.702 04:14:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.702 04:14:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.702 04:14:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:03.702 04:14:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.702 04:14:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:03.702 04:14:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:03.702 04:14:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:03.702 04:14:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:03.702 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.702 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 nvme0n1 00:23:03.961 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.961 04:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.961 04:14:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.961 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.961 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.961 04:14:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.961 04:14:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.961 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.961 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.961 04:14:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.961 04:14:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:03.961 04:14:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.961 04:14:18 -- host/auth.sh@44 -- # digest=sha256 00:23:03.961 04:14:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.961 04:14:18 -- host/auth.sh@44 -- # keyid=3 00:23:03.961 04:14:18 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:03.961 04:14:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:03.961 04:14:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:03.961 04:14:18 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:03.961 04:14:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:23:03.961 04:14:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.961 04:14:18 -- host/auth.sh@68 -- # digest=sha256 00:23:03.961 04:14:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:03.961 04:14:18 -- host/auth.sh@68 -- # keyid=3 00:23:03.961 04:14:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.961 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.961 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:03.961 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.961 04:14:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.961 04:14:18 -- nvmf/common.sh@717 -- # local ip 00:23:03.961 04:14:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.962 04:14:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.962 04:14:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
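
The get_main_ns_ip trace repeating here (local ip, the ip_candidates map, echo 10.0.0.1) is a small lookup; a sketch, assuming the transport is selected through the usual TEST_TRANSPORT variable:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}        # tcp here, so NVMF_INITIATOR_IP
    [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"  # indirect expansion prints 10.0.0.1
}
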
00:23:03.962 04:14:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.962 04:14:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:03.962 04:14:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.962 04:14:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:03.962 04:14:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:03.962 04:14:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:03.962 04:14:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:03.962 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.962 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 nvme0n1 00:23:04.221 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.221 04:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.221 04:14:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.221 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.221 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.221 04:14:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.221 04:14:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.221 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.221 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.221 04:14:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:04.221 04:14:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:04.221 04:14:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:04.221 04:14:18 -- host/auth.sh@44 -- # digest=sha256 00:23:04.221 04:14:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.221 04:14:18 -- host/auth.sh@44 -- # keyid=4 00:23:04.221 04:14:18 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:04.221 04:14:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:04.221 04:14:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:04.221 04:14:18 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:04.221 04:14:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:23:04.221 04:14:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:04.221 04:14:18 -- host/auth.sh@68 -- # digest=sha256 00:23:04.221 04:14:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:04.221 04:14:18 -- host/auth.sh@68 -- # keyid=4 00:23:04.221 04:14:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:04.221 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.221 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.221 04:14:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:04.221 04:14:18 -- nvmf/common.sh@717 -- # local ip 00:23:04.221 04:14:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:04.221 04:14:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:04.221 04:14:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.221 04:14:18 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.221 04:14:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:04.221 04:14:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.221 04:14:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:04.221 04:14:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:04.221 04:14:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:04.221 04:14:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.221 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.221 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.221 nvme0n1 00:23:04.221 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.222 04:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.222 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.222 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.222 04:14:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.222 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.481 04:14:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.481 04:14:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.481 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.481 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.481 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.481 04:14:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.481 04:14:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:04.481 04:14:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:04.481 04:14:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:04.481 04:14:18 -- host/auth.sh@44 -- # digest=sha256 00:23:04.481 04:14:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.481 04:14:18 -- host/auth.sh@44 -- # keyid=0 00:23:04.481 04:14:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:04.481 04:14:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:04.481 04:14:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:04.481 04:14:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:04.481 04:14:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:23:04.481 04:14:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:04.481 04:14:18 -- host/auth.sh@68 -- # digest=sha256 00:23:04.481 04:14:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:04.481 04:14:18 -- host/auth.sh@68 -- # keyid=0 00:23:04.481 04:14:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.481 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.481 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.481 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.481 04:14:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:04.481 04:14:18 -- nvmf/common.sh@717 -- # local ip 00:23:04.481 04:14:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:04.481 04:14:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:04.481 04:14:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.481 04:14:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.481 04:14:18 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:23:04.481 04:14:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.481 04:14:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:04.481 04:14:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:04.481 04:14:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:04.481 04:14:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:04.481 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.481 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.481 nvme0n1 00:23:04.481 04:14:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.481 04:14:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.481 04:14:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.481 04:14:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.481 04:14:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.481 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.741 04:14:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.741 04:14:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.741 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.741 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.741 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.741 04:14:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:04.741 04:14:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:04.741 04:14:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:04.741 04:14:19 -- host/auth.sh@44 -- # digest=sha256 00:23:04.741 04:14:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.741 04:14:19 -- host/auth.sh@44 -- # keyid=1 00:23:04.741 04:14:19 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:04.741 04:14:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:04.741 04:14:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:04.741 04:14:19 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:04.741 04:14:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:23:04.741 04:14:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:04.741 04:14:19 -- host/auth.sh@68 -- # digest=sha256 00:23:04.741 04:14:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:04.741 04:14:19 -- host/auth.sh@68 -- # keyid=1 00:23:04.741 04:14:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.741 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.741 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.741 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.741 04:14:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:04.741 04:14:19 -- nvmf/common.sh@717 -- # local ip 00:23:04.741 04:14:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:04.741 04:14:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:04.741 04:14:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.741 04:14:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.741 04:14:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:04.741 04:14:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.741 04:14:19 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:04.741 04:14:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:04.741 04:14:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:04.741 04:14:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:04.741 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.741 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.741 nvme0n1 00:23:04.741 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.741 04:14:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.741 04:14:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.741 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.741 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.000 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.000 04:14:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.000 04:14:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.000 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.000 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.000 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.000 04:14:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.000 04:14:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:05.000 04:14:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.000 04:14:19 -- host/auth.sh@44 -- # digest=sha256 00:23:05.000 04:14:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.000 04:14:19 -- host/auth.sh@44 -- # keyid=2 00:23:05.000 04:14:19 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:05.000 04:14:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:05.000 04:14:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:05.000 04:14:19 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:05.000 04:14:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:23:05.000 04:14:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.000 04:14:19 -- host/auth.sh@68 -- # digest=sha256 00:23:05.000 04:14:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:05.000 04:14:19 -- host/auth.sh@68 -- # keyid=2 00:23:05.000 04:14:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.000 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.000 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.000 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.000 04:14:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.000 04:14:19 -- nvmf/common.sh@717 -- # local ip 00:23:05.000 04:14:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.000 04:14:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.000 04:14:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.001 04:14:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.001 04:14:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:05.001 04:14:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.001 04:14:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:05.001 04:14:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:05.001 04:14:19 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:23:05.001 04:14:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:05.001 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.001 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.001 nvme0n1 00:23:05.001 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.001 04:14:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.001 04:14:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.001 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.001 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.260 04:14:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.260 04:14:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.260 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.260 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.260 04:14:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.260 04:14:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:05.260 04:14:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.260 04:14:19 -- host/auth.sh@44 -- # digest=sha256 00:23:05.260 04:14:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.260 04:14:19 -- host/auth.sh@44 -- # keyid=3 00:23:05.260 04:14:19 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:05.260 04:14:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:05.260 04:14:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:05.260 04:14:19 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:05.260 04:14:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:23:05.260 04:14:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.260 04:14:19 -- host/auth.sh@68 -- # digest=sha256 00:23:05.260 04:14:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:05.260 04:14:19 -- host/auth.sh@68 -- # keyid=3 00:23:05.260 04:14:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.260 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.260 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.260 04:14:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.260 04:14:19 -- nvmf/common.sh@717 -- # local ip 00:23:05.260 04:14:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.260 04:14:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.260 04:14:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.260 04:14:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.260 04:14:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:05.260 04:14:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.260 04:14:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:05.260 04:14:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:05.260 04:14:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:05.260 04:14:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:05.260 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.260 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.260 nvme0n1 00:23:05.260 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.520 04:14:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.520 04:14:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.520 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.520 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.520 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.520 04:14:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.520 04:14:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.520 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.520 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.520 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.520 04:14:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.520 04:14:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:05.520 04:14:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.520 04:14:19 -- host/auth.sh@44 -- # digest=sha256 00:23:05.520 04:14:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.520 04:14:19 -- host/auth.sh@44 -- # keyid=4 00:23:05.520 04:14:19 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:05.520 04:14:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:05.520 04:14:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:05.520 04:14:19 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:05.520 04:14:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:23:05.520 04:14:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.520 04:14:19 -- host/auth.sh@68 -- # digest=sha256 00:23:05.520 04:14:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:05.520 04:14:19 -- host/auth.sh@68 -- # keyid=4 00:23:05.520 04:14:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.520 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.520 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.520 04:14:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.520 04:14:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.520 04:14:19 -- nvmf/common.sh@717 -- # local ip 00:23:05.520 04:14:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.520 04:14:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.520 04:14:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.520 04:14:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.520 04:14:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:05.520 04:14:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.520 04:14:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:05.520 04:14:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:05.520 04:14:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:05.520 04:14:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:23:05.520 04:14:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.520 04:14:19 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 nvme0n1 00:23:05.780 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 04:14:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.780 04:14:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.780 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.780 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 04:14:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.780 04:14:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.780 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.780 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 04:14:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.780 04:14:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.780 04:14:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:05.780 04:14:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.780 04:14:20 -- host/auth.sh@44 -- # digest=sha256 00:23:05.780 04:14:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:05.780 04:14:20 -- host/auth.sh@44 -- # keyid=0 00:23:05.780 04:14:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:05.780 04:14:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:05.780 04:14:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:05.780 04:14:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:05.780 04:14:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:23:05.780 04:14:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.780 04:14:20 -- host/auth.sh@68 -- # digest=sha256 00:23:05.780 04:14:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:05.780 04:14:20 -- host/auth.sh@68 -- # keyid=0 00:23:05.780 04:14:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:05.780 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.780 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:05.780 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.780 04:14:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.780 04:14:20 -- nvmf/common.sh@717 -- # local ip 00:23:05.780 04:14:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.780 04:14:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.780 04:14:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.780 04:14:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.780 04:14:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:05.780 04:14:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.780 04:14:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:05.780 04:14:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:05.780 04:14:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:05.780 04:14:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:05.780 04:14:20 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:23:05.780 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.040 nvme0n1 00:23:06.040 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.040 04:14:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.040 04:14:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:06.040 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.040 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.040 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.040 04:14:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.040 04:14:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.040 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.040 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.040 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.040 04:14:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:06.040 04:14:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:06.040 04:14:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:06.040 04:14:20 -- host/auth.sh@44 -- # digest=sha256 00:23:06.040 04:14:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.040 04:14:20 -- host/auth.sh@44 -- # keyid=1 00:23:06.040 04:14:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:06.040 04:14:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:06.040 04:14:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:06.040 04:14:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:06.040 04:14:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:23:06.040 04:14:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:06.040 04:14:20 -- host/auth.sh@68 -- # digest=sha256 00:23:06.040 04:14:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:06.040 04:14:20 -- host/auth.sh@68 -- # keyid=1 00:23:06.040 04:14:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.040 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.040 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.040 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.040 04:14:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:06.040 04:14:20 -- nvmf/common.sh@717 -- # local ip 00:23:06.040 04:14:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:06.040 04:14:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:06.040 04:14:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.040 04:14:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.040 04:14:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:06.040 04:14:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.040 04:14:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:06.040 04:14:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:06.040 04:14:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:06.040 04:14:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:06.040 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.040 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.300 nvme0n1 00:23:06.300 
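
Everything from the first connect onward is generated by the nested matrix traced at host/auth.sh@107-111: every digest, DH group, and key index is exercised once. Its shape, reconstructed from the trace:

for digest in "${digests[@]}"; do         # sha256 sha384 sha512
    for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do    # 0 1 2 3 4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
        done
    done
done
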
04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.300 04:14:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.300 04:14:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:06.300 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.300 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.300 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.300 04:14:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.300 04:14:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.300 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.300 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.300 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.300 04:14:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:06.300 04:14:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:06.300 04:14:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:06.300 04:14:20 -- host/auth.sh@44 -- # digest=sha256 00:23:06.300 04:14:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.300 04:14:20 -- host/auth.sh@44 -- # keyid=2 00:23:06.300 04:14:20 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:06.300 04:14:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:06.300 04:14:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:06.300 04:14:20 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:06.300 04:14:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:23:06.300 04:14:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:06.300 04:14:20 -- host/auth.sh@68 -- # digest=sha256 00:23:06.300 04:14:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:06.300 04:14:20 -- host/auth.sh@68 -- # keyid=2 00:23:06.300 04:14:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.300 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.300 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.559 04:14:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.559 04:14:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:06.559 04:14:20 -- nvmf/common.sh@717 -- # local ip 00:23:06.559 04:14:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:06.559 04:14:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:06.559 04:14:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.559 04:14:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.559 04:14:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:06.559 04:14:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.559 04:14:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:06.559 04:14:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:06.559 04:14:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:06.559 04:14:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:06.559 04:14:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.559 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:06.818 nvme0n1 00:23:06.818 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.818 04:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.818 04:14:21 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:23:06.818 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.818 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:06.818 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.818 04:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.818 04:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.818 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.819 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:06.819 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.819 04:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:06.819 04:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:06.819 04:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:06.819 04:14:21 -- host/auth.sh@44 -- # digest=sha256 00:23:06.819 04:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.819 04:14:21 -- host/auth.sh@44 -- # keyid=3 00:23:06.819 04:14:21 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:06.819 04:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:06.819 04:14:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:06.819 04:14:21 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:06.819 04:14:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:23:06.819 04:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:06.819 04:14:21 -- host/auth.sh@68 -- # digest=sha256 00:23:06.819 04:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:06.819 04:14:21 -- host/auth.sh@68 -- # keyid=3 00:23:06.819 04:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.819 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.819 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:06.819 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.819 04:14:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:06.819 04:14:21 -- nvmf/common.sh@717 -- # local ip 00:23:06.819 04:14:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:06.819 04:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:06.819 04:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.819 04:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.819 04:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:06.819 04:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.819 04:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:06.819 04:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:06.819 04:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:06.819 04:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:06.819 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.819 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 nvme0n1 00:23:07.078 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.078 04:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.078 04:14:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:07.078 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
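
On the initiator side, each connect_authenticate is the same four RPCs; with the values of the surrounding iteration (sha256, ffdhe4096, key3) they are:

scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers    # expects "nvme0"
scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_detach_controller nvme0
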
00:23:07.078 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.078 04:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.078 04:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.078 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.078 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.078 04:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:07.078 04:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:07.078 04:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:07.078 04:14:21 -- host/auth.sh@44 -- # digest=sha256 00:23:07.078 04:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.078 04:14:21 -- host/auth.sh@44 -- # keyid=4 00:23:07.078 04:14:21 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:07.078 04:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:07.078 04:14:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:07.078 04:14:21 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:07.078 04:14:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:23:07.078 04:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:07.078 04:14:21 -- host/auth.sh@68 -- # digest=sha256 00:23:07.078 04:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:07.078 04:14:21 -- host/auth.sh@68 -- # keyid=4 00:23:07.078 04:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.078 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.078 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.078 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.078 04:14:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:07.078 04:14:21 -- nvmf/common.sh@717 -- # local ip 00:23:07.078 04:14:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:07.078 04:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:07.078 04:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.078 04:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.078 04:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:07.078 04:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.078 04:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:07.078 04:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:07.078 04:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:07.078 04:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.078 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.078 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.337 nvme0n1 00:23:07.337 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.337 04:14:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:07.337 04:14:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.337 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.337 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.337 
04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.337 04:14:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.337 04:14:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.337 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.337 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.596 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.596 04:14:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.596 04:14:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:07.596 04:14:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:07.596 04:14:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:07.596 04:14:21 -- host/auth.sh@44 -- # digest=sha256 00:23:07.596 04:14:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.596 04:14:21 -- host/auth.sh@44 -- # keyid=0 00:23:07.596 04:14:21 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:07.596 04:14:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:07.596 04:14:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:07.596 04:14:21 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:07.596 04:14:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:23:07.596 04:14:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:07.596 04:14:21 -- host/auth.sh@68 -- # digest=sha256 00:23:07.596 04:14:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:07.596 04:14:21 -- host/auth.sh@68 -- # keyid=0 00:23:07.596 04:14:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.596 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.596 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.596 04:14:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.596 04:14:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:07.596 04:14:21 -- nvmf/common.sh@717 -- # local ip 00:23:07.596 04:14:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:07.596 04:14:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:07.596 04:14:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.596 04:14:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.596 04:14:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:07.596 04:14:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.596 04:14:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:07.596 04:14:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:07.596 04:14:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:07.596 04:14:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:07.596 04:14:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.596 04:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.855 nvme0n1 00:23:07.855 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.855 04:14:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:07.855 04:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.855 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.855 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.855 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.855 04:14:22 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.855 04:14:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.855 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.855 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.114 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.114 04:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:08.114 04:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:08.114 04:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:08.114 04:14:22 -- host/auth.sh@44 -- # digest=sha256 00:23:08.114 04:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.114 04:14:22 -- host/auth.sh@44 -- # keyid=1 00:23:08.114 04:14:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:08.114 04:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:08.114 04:14:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:08.114 04:14:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:08.114 04:14:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:23:08.114 04:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:08.114 04:14:22 -- host/auth.sh@68 -- # digest=sha256 00:23:08.114 04:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:08.114 04:14:22 -- host/auth.sh@68 -- # keyid=1 00:23:08.114 04:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.114 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.114 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.114 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.114 04:14:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:08.114 04:14:22 -- nvmf/common.sh@717 -- # local ip 00:23:08.114 04:14:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:08.114 04:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:08.114 04:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.114 04:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.114 04:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:08.114 04:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.114 04:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:08.114 04:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:08.114 04:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:08.114 04:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:08.114 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.114 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.373 nvme0n1 00:23:08.373 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.373 04:14:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:08.373 04:14:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.373 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.373 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.373 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.632 04:14:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.632 04:14:22 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:08.632 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.632 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.632 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.632 04:14:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:08.632 04:14:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:08.632 04:14:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:08.632 04:14:22 -- host/auth.sh@44 -- # digest=sha256 00:23:08.632 04:14:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.632 04:14:22 -- host/auth.sh@44 -- # keyid=2 00:23:08.632 04:14:22 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:08.632 04:14:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:08.632 04:14:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:08.632 04:14:22 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:08.632 04:14:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:23:08.632 04:14:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:08.632 04:14:22 -- host/auth.sh@68 -- # digest=sha256 00:23:08.632 04:14:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:08.632 04:14:22 -- host/auth.sh@68 -- # keyid=2 00:23:08.632 04:14:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.632 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.632 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:08.632 04:14:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.632 04:14:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:08.632 04:14:22 -- nvmf/common.sh@717 -- # local ip 00:23:08.632 04:14:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:08.632 04:14:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:08.632 04:14:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.632 04:14:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.632 04:14:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:08.632 04:14:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.632 04:14:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:08.632 04:14:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:08.632 04:14:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:08.632 04:14:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:08.632 04:14:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.632 04:14:22 -- common/autotest_common.sh@10 -- # set +x 00:23:09.201 nvme0n1 00:23:09.201 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.201 04:14:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.201 04:14:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.201 04:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.201 04:14:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.201 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.201 04:14:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.201 04:14:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.201 04:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.201 04:14:23 -- common/autotest_common.sh@10 -- # 
set +x 00:23:09.201 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.201 04:14:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.201 04:14:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:09.201 04:14:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.201 04:14:23 -- host/auth.sh@44 -- # digest=sha256 00:23:09.201 04:14:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.201 04:14:23 -- host/auth.sh@44 -- # keyid=3 00:23:09.201 04:14:23 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:09.201 04:14:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:09.201 04:14:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:09.201 04:14:23 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:09.201 04:14:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:23:09.201 04:14:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.201 04:14:23 -- host/auth.sh@68 -- # digest=sha256 00:23:09.201 04:14:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:09.201 04:14:23 -- host/auth.sh@68 -- # keyid=3 00:23:09.201 04:14:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.201 04:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.201 04:14:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.201 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.201 04:14:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.201 04:14:23 -- nvmf/common.sh@717 -- # local ip 00:23:09.201 04:14:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.201 04:14:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.201 04:14:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.202 04:14:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.202 04:14:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:09.202 04:14:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.202 04:14:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:09.202 04:14:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:09.202 04:14:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:09.202 04:14:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:09.202 04:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.202 04:14:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.461 nvme0n1 00:23:09.461 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.461 04:14:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.461 04:14:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.461 04:14:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.461 04:14:23 -- common/autotest_common.sh@10 -- # set +x 00:23:09.461 04:14:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.720 04:14:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.720 04:14:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.720 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.720 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.720 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.720 04:14:24 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.720 04:14:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:09.720 04:14:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.720 04:14:24 -- host/auth.sh@44 -- # digest=sha256 00:23:09.720 04:14:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.720 04:14:24 -- host/auth.sh@44 -- # keyid=4 00:23:09.720 04:14:24 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:09.720 04:14:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:09.720 04:14:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:09.720 04:14:24 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:09.720 04:14:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:23:09.720 04:14:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.720 04:14:24 -- host/auth.sh@68 -- # digest=sha256 00:23:09.720 04:14:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:09.720 04:14:24 -- host/auth.sh@68 -- # keyid=4 00:23:09.720 04:14:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.720 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.720 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.720 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.720 04:14:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.720 04:14:24 -- nvmf/common.sh@717 -- # local ip 00:23:09.720 04:14:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.720 04:14:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.720 04:14:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.720 04:14:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.720 04:14:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:09.720 04:14:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.720 04:14:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:09.720 04:14:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:09.720 04:14:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:09.720 04:14:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.720 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.720 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:09.979 nvme0n1 00:23:09.979 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.238 04:14:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.238 04:14:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:10.238 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.238 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.238 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.238 04:14:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.238 04:14:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.238 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.238 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.238 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.238 04:14:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.238 04:14:24 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.238 04:14:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:10.238 04:14:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.238 04:14:24 -- host/auth.sh@44 -- # digest=sha256 00:23:10.238 04:14:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.238 04:14:24 -- host/auth.sh@44 -- # keyid=0 00:23:10.238 04:14:24 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:10.238 04:14:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:10.238 04:14:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:10.238 04:14:24 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:10.238 04:14:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:23:10.238 04:14:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.238 04:14:24 -- host/auth.sh@68 -- # digest=sha256 00:23:10.238 04:14:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:10.239 04:14:24 -- host/auth.sh@68 -- # keyid=0 00:23:10.239 04:14:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.239 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.239 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.239 04:14:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.239 04:14:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.239 04:14:24 -- nvmf/common.sh@717 -- # local ip 00:23:10.239 04:14:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.239 04:14:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.239 04:14:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.239 04:14:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.239 04:14:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:10.239 04:14:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.239 04:14:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:10.239 04:14:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:10.239 04:14:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:10.239 04:14:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:10.239 04:14:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.239 04:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:11.185 nvme0n1 00:23:11.185 04:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.185 04:14:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.185 04:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.185 04:14:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.185 04:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.185 04:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.185 04:14:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.185 04:14:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.185 04:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.185 04:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.185 04:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.185 04:14:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:11.185 04:14:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:11.185 04:14:25 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:11.186 04:14:25 -- host/auth.sh@44 -- # digest=sha256 00:23:11.186 04:14:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.186 04:14:25 -- host/auth.sh@44 -- # keyid=1 00:23:11.186 04:14:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:11.186 04:14:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:11.186 04:14:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:11.186 04:14:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:11.186 04:14:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:23:11.186 04:14:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:11.186 04:14:25 -- host/auth.sh@68 -- # digest=sha256 00:23:11.186 04:14:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:11.186 04:14:25 -- host/auth.sh@68 -- # keyid=1 00:23:11.186 04:14:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.186 04:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.186 04:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.186 04:14:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.186 04:14:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:11.186 04:14:25 -- nvmf/common.sh@717 -- # local ip 00:23:11.186 04:14:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.186 04:14:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.186 04:14:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.186 04:14:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.186 04:14:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:11.186 04:14:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.186 04:14:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:11.186 04:14:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:11.186 04:14:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:11.186 04:14:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:11.186 04:14:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.186 04:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:11.753 nvme0n1 00:23:11.753 04:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.753 04:14:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.753 04:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.753 04:14:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.753 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:23:11.753 04:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.753 04:14:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.753 04:14:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.753 04:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.753 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:23:12.013 04:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.013 04:14:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:12.013 04:14:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:12.013 04:14:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:12.013 04:14:26 -- host/auth.sh@44 -- # digest=sha256 
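Between the target-side key updates traced above and below, every iteration runs the host-side connect_authenticate (host/auth.sh@66-74). The traced RPC calls reconstruct almost verbatim into the sketch that follows; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the function body (argument handling, the name comparison) is inferred from the trace rather than copied from source.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Allow exactly one digest and one DH group, so a successful attach can
    # only mean this specific combination was negotiated.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    # A DH-HMAC-CHAP failure surfaces as a missing controller, so verify the
    # name before tearing the connection down for the next iteration.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The bare "nvme0n1" lines in the trace are the attach RPC's stdout: the namespace bdev created after each successful authentication.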
00:23:12.013 04:14:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.013 04:14:26 -- host/auth.sh@44 -- # keyid=2 00:23:12.013 04:14:26 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:12.013 04:14:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:12.013 04:14:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:12.013 04:14:26 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:12.013 04:14:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:23:12.013 04:14:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:12.013 04:14:26 -- host/auth.sh@68 -- # digest=sha256 00:23:12.013 04:14:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:12.013 04:14:26 -- host/auth.sh@68 -- # keyid=2 00:23:12.013 04:14:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.013 04:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.013 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:23:12.013 04:14:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.013 04:14:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:12.013 04:14:26 -- nvmf/common.sh@717 -- # local ip 00:23:12.013 04:14:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:12.013 04:14:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:12.013 04:14:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.013 04:14:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.013 04:14:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:12.013 04:14:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.013 04:14:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:12.013 04:14:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:12.013 04:14:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:12.013 04:14:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:12.013 04:14:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.013 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 nvme0n1 00:23:12.581 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.581 04:14:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.581 04:14:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:12.581 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.581 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.581 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.840 04:14:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.840 04:14:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.840 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.840 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.840 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.840 04:14:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:12.840 04:14:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:12.840 04:14:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:12.840 04:14:27 -- host/auth.sh@44 -- # digest=sha256 00:23:12.840 04:14:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.840 04:14:27 -- host/auth.sh@44 -- # keyid=3 00:23:12.840 04:14:27 -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:12.840 04:14:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:12.840 04:14:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:12.840 04:14:27 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:12.840 04:14:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:23:12.840 04:14:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:12.840 04:14:27 -- host/auth.sh@68 -- # digest=sha256 00:23:12.840 04:14:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:12.840 04:14:27 -- host/auth.sh@68 -- # keyid=3 00:23:12.840 04:14:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.840 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.840 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:12.840 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.840 04:14:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:12.840 04:14:27 -- nvmf/common.sh@717 -- # local ip 00:23:12.840 04:14:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:12.840 04:14:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:12.840 04:14:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.840 04:14:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.840 04:14:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:12.840 04:14:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.840 04:14:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:12.840 04:14:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:12.840 04:14:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:12.840 04:14:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:12.840 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.840 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:13.408 nvme0n1 00:23:13.408 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.408 04:14:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.408 04:14:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:13.408 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.408 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:13.409 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.668 04:14:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.668 04:14:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.668 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.668 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:13.668 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.668 04:14:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:13.668 04:14:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:13.668 04:14:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:13.668 04:14:27 -- host/auth.sh@44 -- # digest=sha256 00:23:13.668 04:14:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.668 04:14:27 -- host/auth.sh@44 -- # keyid=4 00:23:13.668 04:14:27 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:13.668 
04:14:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:13.668 04:14:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:13.668 04:14:27 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:13.668 04:14:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:23:13.668 04:14:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:13.668 04:14:27 -- host/auth.sh@68 -- # digest=sha256 00:23:13.668 04:14:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:13.668 04:14:27 -- host/auth.sh@68 -- # keyid=4 00:23:13.668 04:14:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.668 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.668 04:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:13.668 04:14:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.668 04:14:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:13.668 04:14:27 -- nvmf/common.sh@717 -- # local ip 00:23:13.668 04:14:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:13.668 04:14:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:13.668 04:14:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.668 04:14:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.668 04:14:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:13.668 04:14:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.668 04:14:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:13.668 04:14:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:13.668 04:14:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:13.668 04:14:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.668 04:14:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.668 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.605 nvme0n1 00:23:14.605 04:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.605 04:14:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.605 04:14:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:14.605 04:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.605 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.605 04:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.605 04:14:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.605 04:14:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.606 04:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 04:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:28 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:14.606 04:14:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.606 04:14:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:14.606 04:14:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:14.606 04:14:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:14.606 04:14:28 -- host/auth.sh@44 -- # digest=sha384 00:23:14.606 04:14:28 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.606 04:14:28 -- host/auth.sh@44 -- # keyid=0 00:23:14.606 04:14:28 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:14.606 04:14:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:14.606 04:14:28 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:14.606 04:14:28 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:14.606 04:14:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:23:14.606 04:14:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:14.606 04:14:28 -- host/auth.sh@68 -- # digest=sha384 00:23:14.606 04:14:28 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:14.606 04:14:28 -- host/auth.sh@68 -- # keyid=0 00:23:14.606 04:14:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.606 04:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 04:14:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:14.606 04:14:28 -- nvmf/common.sh@717 -- # local ip 00:23:14.606 04:14:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:14.606 04:14:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:14.606 04:14:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.606 04:14:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.606 04:14:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:14.606 04:14:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.606 04:14:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:14.606 04:14:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:14.606 04:14:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:14.606 04:14:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:14.606 04:14:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 nvme0n1 00:23:14.606 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.606 04:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:14.606 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.606 04:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.606 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:14.606 04:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:14.606 04:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:14.606 04:14:29 -- host/auth.sh@44 -- # digest=sha384 00:23:14.606 04:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.606 04:14:29 -- host/auth.sh@44 -- # keyid=1 00:23:14.606 04:14:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:14.606 04:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:14.606 
04:14:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:14.606 04:14:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:14.606 04:14:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:23:14.606 04:14:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:14.606 04:14:29 -- host/auth.sh@68 -- # digest=sha384 00:23:14.606 04:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:14.606 04:14:29 -- host/auth.sh@68 -- # keyid=1 00:23:14.606 04:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.606 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.606 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.606 04:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:14.606 04:14:29 -- nvmf/common.sh@717 -- # local ip 00:23:14.606 04:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:14.606 04:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:14.606 04:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.606 04:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.606 04:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:14.606 04:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.606 04:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:14.606 04:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:14.606 04:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:14.606 04:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:14.606 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.606 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.865 nvme0n1 00:23:14.865 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.865 04:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.865 04:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:14.865 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.865 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.865 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.865 04:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.865 04:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.865 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.865 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.865 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.865 04:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:14.865 04:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:14.865 04:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:14.865 04:14:29 -- host/auth.sh@44 -- # digest=sha384 00:23:14.865 04:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.865 04:14:29 -- host/auth.sh@44 -- # keyid=2 00:23:14.865 04:14:29 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:14.865 04:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:14.865 04:14:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:14.865 04:14:29 -- host/auth.sh@49 -- # echo 
DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:14.865 04:14:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:23:14.865 04:14:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:14.865 04:14:29 -- host/auth.sh@68 -- # digest=sha384 00:23:14.865 04:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:14.865 04:14:29 -- host/auth.sh@68 -- # keyid=2 00:23:14.865 04:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.865 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.865 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:14.865 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.865 04:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:14.865 04:14:29 -- nvmf/common.sh@717 -- # local ip 00:23:14.865 04:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:14.865 04:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:14.865 04:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.865 04:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.865 04:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:14.865 04:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.865 04:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:14.865 04:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:14.865 04:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:14.865 04:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:14.865 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.865 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.125 nvme0n1 00:23:15.125 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.125 04:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.125 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.125 04:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.125 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.125 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.125 04:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.125 04:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.125 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.125 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.125 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.125 04:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.125 04:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:15.125 04:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.125 04:14:29 -- host/auth.sh@44 -- # digest=sha384 00:23:15.125 04:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.125 04:14:29 -- host/auth.sh@44 -- # keyid=3 00:23:15.125 04:14:29 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:15.125 04:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:15.125 04:14:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:15.125 04:14:29 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:15.125 04:14:29 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:23:15.125 04:14:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:15.125 04:14:29 -- host/auth.sh@68 -- # digest=sha384 00:23:15.125 04:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:15.125 04:14:29 -- host/auth.sh@68 -- # keyid=3 00:23:15.125 04:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.125 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.125 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.125 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.125 04:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.125 04:14:29 -- nvmf/common.sh@717 -- # local ip 00:23:15.125 04:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.125 04:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.125 04:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.125 04:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.125 04:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:15.125 04:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.125 04:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:15.125 04:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:15.125 04:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:15.125 04:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:15.125 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.125 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.385 nvme0n1 00:23:15.385 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.385 04:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.385 04:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.385 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.385 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.385 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.385 04:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.385 04:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.385 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.385 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.385 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.385 04:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.385 04:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:15.385 04:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.385 04:14:29 -- host/auth.sh@44 -- # digest=sha384 00:23:15.385 04:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.385 04:14:29 -- host/auth.sh@44 -- # keyid=4 00:23:15.385 04:14:29 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:15.385 04:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:15.385 04:14:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:15.385 04:14:29 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:15.385 04:14:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:23:15.385 04:14:29 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:23:15.385 04:14:29 -- host/auth.sh@68 -- # digest=sha384 00:23:15.385 04:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:15.385 04:14:29 -- host/auth.sh@68 -- # keyid=4 00:23:15.385 04:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.385 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.385 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.385 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.385 04:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.385 04:14:29 -- nvmf/common.sh@717 -- # local ip 00:23:15.385 04:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.385 04:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.385 04:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.385 04:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.385 04:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:15.385 04:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.385 04:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:15.385 04:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:15.385 04:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:15.385 04:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.385 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.385 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.644 nvme0n1 00:23:15.644 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.644 04:14:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.644 04:14:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.644 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.644 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.644 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.644 04:14:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.644 04:14:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.644 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.644 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.644 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.644 04:14:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.644 04:14:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.644 04:14:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:15.644 04:14:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.644 04:14:29 -- host/auth.sh@44 -- # digest=sha384 00:23:15.644 04:14:29 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.644 04:14:29 -- host/auth.sh@44 -- # keyid=0 00:23:15.644 04:14:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:15.644 04:14:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:15.644 04:14:29 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:15.644 04:14:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:15.644 04:14:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:23:15.644 04:14:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:15.644 04:14:29 -- host/auth.sh@68 -- # 
digest=sha384 00:23:15.644 04:14:29 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:15.644 04:14:29 -- host/auth.sh@68 -- # keyid=0 00:23:15.644 04:14:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.644 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.644 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.644 04:14:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.644 04:14:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.644 04:14:29 -- nvmf/common.sh@717 -- # local ip 00:23:15.644 04:14:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.644 04:14:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.645 04:14:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.645 04:14:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.645 04:14:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:15.645 04:14:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.645 04:14:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:15.645 04:14:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:15.645 04:14:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:15.645 04:14:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:15.645 04:14:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.645 04:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:15.904 nvme0n1 00:23:15.904 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.904 04:14:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.904 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.904 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.904 04:14:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.904 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.904 04:14:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.904 04:14:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.904 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.904 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.904 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.904 04:14:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.904 04:14:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:15.904 04:14:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.904 04:14:30 -- host/auth.sh@44 -- # digest=sha384 00:23:15.904 04:14:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.904 04:14:30 -- host/auth.sh@44 -- # keyid=1 00:23:15.904 04:14:30 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:15.904 04:14:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:15.904 04:14:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:15.904 04:14:30 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:15.904 04:14:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:23:15.904 04:14:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:15.904 04:14:30 -- host/auth.sh@68 -- # digest=sha384 00:23:15.904 04:14:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:15.904 04:14:30 -- host/auth.sh@68 
-- # keyid=1 00:23:15.904 04:14:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.904 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.904 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:15.904 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.904 04:14:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.904 04:14:30 -- nvmf/common.sh@717 -- # local ip 00:23:15.904 04:14:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.904 04:14:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.904 04:14:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.904 04:14:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.904 04:14:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:15.904 04:14:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.904 04:14:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:15.904 04:14:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:15.904 04:14:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:15.904 04:14:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:15.904 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.904 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.163 nvme0n1 00:23:16.163 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.163 04:14:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.163 04:14:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:16.163 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.163 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.163 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.163 04:14:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.163 04:14:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.163 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.163 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.163 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.163 04:14:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:16.163 04:14:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:16.163 04:14:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:16.163 04:14:30 -- host/auth.sh@44 -- # digest=sha384 00:23:16.163 04:14:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.163 04:14:30 -- host/auth.sh@44 -- # keyid=2 00:23:16.163 04:14:30 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:16.163 04:14:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:16.163 04:14:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:16.163 04:14:30 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:16.163 04:14:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:23:16.163 04:14:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:16.163 04:14:30 -- host/auth.sh@68 -- # digest=sha384 00:23:16.163 04:14:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:16.163 04:14:30 -- host/auth.sh@68 -- # keyid=2 00:23:16.163 04:14:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.163 04:14:30 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.163 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.163 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.163 04:14:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:16.163 04:14:30 -- nvmf/common.sh@717 -- # local ip 00:23:16.163 04:14:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:16.163 04:14:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:16.163 04:14:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.163 04:14:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.163 04:14:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:16.163 04:14:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.163 04:14:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:16.163 04:14:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:16.163 04:14:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:16.163 04:14:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:16.163 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.163 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.422 nvme0n1 00:23:16.422 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.422 04:14:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.422 04:14:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:16.422 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.422 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.422 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.422 04:14:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.422 04:14:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.422 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.422 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.422 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.422 04:14:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:16.422 04:14:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:16.422 04:14:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:16.422 04:14:30 -- host/auth.sh@44 -- # digest=sha384 00:23:16.422 04:14:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.422 04:14:30 -- host/auth.sh@44 -- # keyid=3 00:23:16.422 04:14:30 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:16.422 04:14:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:16.422 04:14:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:16.422 04:14:30 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:16.422 04:14:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:23:16.422 04:14:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:16.422 04:14:30 -- host/auth.sh@68 -- # digest=sha384 00:23:16.422 04:14:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:16.422 04:14:30 -- host/auth.sh@68 -- # keyid=3 00:23:16.422 04:14:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.422 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.422 04:14:30 -- common/autotest_common.sh@10 -- # set +x 
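[Editor's note] The trace above repeats one fixed sequence for every (digest, dhgroup, keyid) combination. A minimal sketch of that host-side sequence, assembled from the rpc_cmd calls visible in the trace (rpc_cmd, the nvme0 bdev name, the 10.0.0.1 address, and both NQNs appear verbatim; the function name here is hypothetical, and the key0..key4 DH-HMAC-CHAP secrets are assumed to have been registered earlier in the script):

    # One iteration of the auth sweep: configure, connect, verify, tear down.
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pin the host to a single digest/DH group so the negotiated parameters are known.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Connect using the DH-HMAC-CHAP key under test.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # Authentication succeeded only if the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # Detach so the next combination starts from a clean state.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The "nvme0n1" lines interleaved in the trace are the namespace appearing once each attach succeeds.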
00:23:16.422 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.422 04:14:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:16.422 04:14:30 -- nvmf/common.sh@717 -- # local ip 00:23:16.422 04:14:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:16.422 04:14:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:16.422 04:14:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.422 04:14:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.422 04:14:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:16.422 04:14:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.423 04:14:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:16.423 04:14:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:16.423 04:14:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:16.423 04:14:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:16.423 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.423 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.682 nvme0n1 00:23:16.682 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.682 04:14:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.682 04:14:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:16.682 04:14:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.682 04:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:16.682 04:14:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.682 04:14:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.682 04:14:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.682 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.682 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.682 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.682 04:14:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:16.682 04:14:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:16.682 04:14:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:16.682 04:14:31 -- host/auth.sh@44 -- # digest=sha384 00:23:16.682 04:14:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.682 04:14:31 -- host/auth.sh@44 -- # keyid=4 00:23:16.682 04:14:31 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:16.682 04:14:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:16.682 04:14:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:16.682 04:14:31 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:16.682 04:14:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:23:16.682 04:14:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:16.682 04:14:31 -- host/auth.sh@68 -- # digest=sha384 00:23:16.682 04:14:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:16.682 04:14:31 -- host/auth.sh@68 -- # keyid=4 00:23:16.682 04:14:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.682 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.682 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.682 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
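[Editor's note] The three echo lines that nvmet_auth_set_key emits at host/auth.sh@47-49 ('hmac(sha384)', the dhgroup name, the DHHC-1 secret) line up with the Linux kernel nvmet configfs authentication attributes. A sketch of what those writes plausibly look like on the target side; the configfs paths are an assumption, not shown in the trace, and the secret below is a placeholder:

    # Provision DH-HMAC-CHAP parameters for one host on the kernel nvmet target.
    HOSTNQN=nqn.2024-02.io.spdk:host0
    HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN          # hypothetical path
    echo 'hmac(sha384)' > "$HOST_DIR/dhchap_hash"             # digest, kernel crypto API name
    echo 'ffdhe3072'    > "$HOST_DIR/dhchap_dhgroup"          # FFDHE group for the exchange
    echo 'DHHC-1:00:<base64 of key||crc32>:' > "$HOST_DIR/dhchap_key"  # placeholder secret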
00:23:16.682 04:14:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:16.682 04:14:31 -- nvmf/common.sh@717 -- # local ip 00:23:16.682 04:14:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:16.682 04:14:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:16.682 04:14:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.682 04:14:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.682 04:14:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:16.682 04:14:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.682 04:14:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:16.682 04:14:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:16.682 04:14:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:16.682 04:14:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.682 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.682 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.941 nvme0n1 00:23:16.941 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.941 04:14:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.941 04:14:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:16.941 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.941 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.941 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.941 04:14:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.941 04:14:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.941 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.941 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.941 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.941 04:14:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.941 04:14:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:16.941 04:14:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:16.941 04:14:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:16.941 04:14:31 -- host/auth.sh@44 -- # digest=sha384 00:23:16.941 04:14:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.941 04:14:31 -- host/auth.sh@44 -- # keyid=0 00:23:16.941 04:14:31 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:16.941 04:14:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:16.941 04:14:31 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:16.941 04:14:31 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:16.941 04:14:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:23:16.941 04:14:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:16.941 04:14:31 -- host/auth.sh@68 -- # digest=sha384 00:23:16.941 04:14:31 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:16.941 04:14:31 -- host/auth.sh@68 -- # keyid=0 00:23:16.941 04:14:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.941 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.941 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:16.941 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.941 04:14:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:16.941 04:14:31 -- 
nvmf/common.sh@717 -- # local ip 00:23:16.941 04:14:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:16.941 04:14:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:16.941 04:14:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.941 04:14:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.941 04:14:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:16.941 04:14:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.941 04:14:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:16.941 04:14:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:16.941 04:14:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:16.941 04:14:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:16.941 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.941 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.200 nvme0n1 00:23:17.200 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.200 04:14:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.200 04:14:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:17.200 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.200 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.200 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.200 04:14:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.200 04:14:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.200 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.200 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.200 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.200 04:14:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:17.200 04:14:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:17.200 04:14:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:17.200 04:14:31 -- host/auth.sh@44 -- # digest=sha384 00:23:17.200 04:14:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.200 04:14:31 -- host/auth.sh@44 -- # keyid=1 00:23:17.200 04:14:31 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:17.200 04:14:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:17.200 04:14:31 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:17.200 04:14:31 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:17.200 04:14:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:23:17.200 04:14:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:17.200 04:14:31 -- host/auth.sh@68 -- # digest=sha384 00:23:17.200 04:14:31 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:17.200 04:14:31 -- host/auth.sh@68 -- # keyid=1 00:23:17.200 04:14:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.200 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.200 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.200 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.200 04:14:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:17.200 04:14:31 -- nvmf/common.sh@717 -- # local ip 00:23:17.200 04:14:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.200 04:14:31 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.200 04:14:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.200 04:14:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.200 04:14:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:17.200 04:14:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.200 04:14:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:17.200 04:14:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:17.200 04:14:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:17.200 04:14:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:17.200 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.200 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.459 nvme0n1 00:23:17.459 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.459 04:14:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.459 04:14:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:17.459 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.459 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.459 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.459 04:14:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.459 04:14:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.459 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.459 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.718 04:14:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.718 04:14:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:17.718 04:14:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:17.718 04:14:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:17.718 04:14:31 -- host/auth.sh@44 -- # digest=sha384 00:23:17.718 04:14:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.718 04:14:31 -- host/auth.sh@44 -- # keyid=2 00:23:17.718 04:14:31 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:17.718 04:14:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:17.718 04:14:31 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:17.718 04:14:31 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:17.718 04:14:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:23:17.718 04:14:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:17.718 04:14:31 -- host/auth.sh@68 -- # digest=sha384 00:23:17.718 04:14:31 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:17.718 04:14:31 -- host/auth.sh@68 -- # keyid=2 00:23:17.718 04:14:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.718 04:14:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.718 04:14:31 -- common/autotest_common.sh@10 -- # set +x 00:23:17.718 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.718 04:14:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:17.718 04:14:32 -- nvmf/common.sh@717 -- # local ip 00:23:17.718 04:14:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.718 04:14:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.718 04:14:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.718 04:14:32 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.718 04:14:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:17.718 04:14:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.718 04:14:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:17.718 04:14:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:17.718 04:14:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:17.718 04:14:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.718 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.718 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:17.978 nvme0n1 00:23:17.978 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.978 04:14:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.978 04:14:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:17.978 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.978 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:17.978 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.978 04:14:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.978 04:14:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.978 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.978 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:17.978 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.978 04:14:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:17.978 04:14:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:17.978 04:14:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:17.978 04:14:32 -- host/auth.sh@44 -- # digest=sha384 00:23:17.978 04:14:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.978 04:14:32 -- host/auth.sh@44 -- # keyid=3 00:23:17.978 04:14:32 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:17.978 04:14:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:17.978 04:14:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:17.978 04:14:32 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:17.978 04:14:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:23:17.978 04:14:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:17.978 04:14:32 -- host/auth.sh@68 -- # digest=sha384 00:23:17.978 04:14:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:17.978 04:14:32 -- host/auth.sh@68 -- # keyid=3 00:23:17.978 04:14:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.978 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.978 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:17.978 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.978 04:14:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:17.978 04:14:32 -- nvmf/common.sh@717 -- # local ip 00:23:17.978 04:14:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.978 04:14:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.978 04:14:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.978 04:14:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.978 04:14:32 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:23:17.978 04:14:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.978 04:14:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:17.978 04:14:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:17.978 04:14:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:17.978 04:14:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:17.978 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.978 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.238 nvme0n1 00:23:18.238 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.238 04:14:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.238 04:14:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:18.238 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.238 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.238 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.238 04:14:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.238 04:14:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.238 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.238 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.238 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.238 04:14:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:18.238 04:14:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:18.238 04:14:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:18.238 04:14:32 -- host/auth.sh@44 -- # digest=sha384 00:23:18.238 04:14:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.238 04:14:32 -- host/auth.sh@44 -- # keyid=4 00:23:18.238 04:14:32 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:18.238 04:14:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:18.238 04:14:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:18.238 04:14:32 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:18.238 04:14:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:23:18.238 04:14:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:18.238 04:14:32 -- host/auth.sh@68 -- # digest=sha384 00:23:18.238 04:14:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:18.238 04:14:32 -- host/auth.sh@68 -- # keyid=4 00:23:18.238 04:14:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.238 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.238 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.238 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.238 04:14:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:18.238 04:14:32 -- nvmf/common.sh@717 -- # local ip 00:23:18.238 04:14:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:18.238 04:14:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:18.238 04:14:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.238 04:14:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.238 04:14:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:18.238 04:14:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:23:18.238 04:14:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:18.238 04:14:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:18.238 04:14:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:18.238 04:14:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.238 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.238 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.497 nvme0n1 00:23:18.497 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.497 04:14:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:18.497 04:14:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.497 04:14:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.497 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:23:18.497 04:14:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.497 04:14:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.497 04:14:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.497 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.497 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:18.756 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.756 04:14:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.756 04:14:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:18.756 04:14:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:18.756 04:14:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:18.756 04:14:33 -- host/auth.sh@44 -- # digest=sha384 00:23:18.756 04:14:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.756 04:14:33 -- host/auth.sh@44 -- # keyid=0 00:23:18.756 04:14:33 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:18.756 04:14:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:18.756 04:14:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:18.756 04:14:33 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:18.756 04:14:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:23:18.756 04:14:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:18.756 04:14:33 -- host/auth.sh@68 -- # digest=sha384 00:23:18.756 04:14:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:18.756 04:14:33 -- host/auth.sh@68 -- # keyid=0 00:23:18.756 04:14:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:18.756 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.756 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:18.756 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.756 04:14:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:18.756 04:14:33 -- nvmf/common.sh@717 -- # local ip 00:23:18.756 04:14:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:18.756 04:14:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:18.756 04:14:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.756 04:14:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.756 04:14:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:18.756 04:14:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.756 04:14:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:18.756 
04:14:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:18.756 04:14:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:18.756 04:14:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:18.756 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.756 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.015 nvme0n1 00:23:19.015 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.015 04:14:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.015 04:14:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:19.015 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.015 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.015 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.273 04:14:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.273 04:14:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.273 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.273 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.273 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.273 04:14:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:19.273 04:14:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:19.273 04:14:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:19.273 04:14:33 -- host/auth.sh@44 -- # digest=sha384 00:23:19.273 04:14:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.273 04:14:33 -- host/auth.sh@44 -- # keyid=1 00:23:19.273 04:14:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:19.273 04:14:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:19.274 04:14:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:19.274 04:14:33 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:19.274 04:14:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:23:19.274 04:14:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:19.274 04:14:33 -- host/auth.sh@68 -- # digest=sha384 00:23:19.274 04:14:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:19.274 04:14:33 -- host/auth.sh@68 -- # keyid=1 00:23:19.274 04:14:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:19.274 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.274 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.274 04:14:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.274 04:14:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:19.274 04:14:33 -- nvmf/common.sh@717 -- # local ip 00:23:19.274 04:14:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:19.274 04:14:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:19.274 04:14:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.274 04:14:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.274 04:14:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:19.274 04:14:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.274 04:14:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:19.274 04:14:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:19.274 04:14:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
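[Editor's note] The secrets in this trace follow the NVMe-oF DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64(key || crc32)>: where <t> names the key transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key followed by a 4-byte CRC32 of it. A quick length check on the key1 secret shown above (pure arithmetic, no target needed):

    # 72 base64 chars with two '=' pads decode to 52 bytes: a 48-byte key + 4-byte CRC32.
    secret='DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==:'
    b64=${secret#DHHC-1:00:}    # strip the prefix and transformation id
    b64=${b64%:}                # strip the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c    # prints 52

The same check on the keyid=4 secret (transformation 03) yields 68 bytes, i.e. a 64-byte key, matching the SHA-512-sized secret used for that key.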
00:23:19.274 04:14:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:19.274 04:14:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.274 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:19.581 nvme0n1 00:23:19.581 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.581 04:14:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.581 04:14:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:19.581 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.581 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:19.581 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.851 04:14:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.851 04:14:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.851 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.851 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:19.851 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.851 04:14:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:19.851 04:14:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:19.851 04:14:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:19.851 04:14:34 -- host/auth.sh@44 -- # digest=sha384 00:23:19.851 04:14:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.851 04:14:34 -- host/auth.sh@44 -- # keyid=2 00:23:19.851 04:14:34 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:19.851 04:14:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:19.851 04:14:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:19.851 04:14:34 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:19.851 04:14:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:23:19.851 04:14:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:19.851 04:14:34 -- host/auth.sh@68 -- # digest=sha384 00:23:19.851 04:14:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:19.851 04:14:34 -- host/auth.sh@68 -- # keyid=2 00:23:19.851 04:14:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:19.851 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.851 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:19.851 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.851 04:14:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:19.851 04:14:34 -- nvmf/common.sh@717 -- # local ip 00:23:19.851 04:14:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:19.851 04:14:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:19.851 04:14:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.851 04:14:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.851 04:14:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:19.851 04:14:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.851 04:14:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:19.851 04:14:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:19.851 04:14:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:19.851 04:14:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:19.851 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.851 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:20.110 nvme0n1 00:23:20.110 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.110 04:14:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.110 04:14:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:20.110 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.110 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:20.110 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.370 04:14:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.370 04:14:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.370 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.370 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:20.370 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.370 04:14:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:20.370 04:14:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:20.370 04:14:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:20.370 04:14:34 -- host/auth.sh@44 -- # digest=sha384 00:23:20.370 04:14:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.370 04:14:34 -- host/auth.sh@44 -- # keyid=3 00:23:20.370 04:14:34 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:20.370 04:14:34 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:20.370 04:14:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:20.370 04:14:34 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:20.370 04:14:34 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:23:20.370 04:14:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:20.370 04:14:34 -- host/auth.sh@68 -- # digest=sha384 00:23:20.370 04:14:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:20.370 04:14:34 -- host/auth.sh@68 -- # keyid=3 00:23:20.370 04:14:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.370 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.370 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:20.370 04:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.370 04:14:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:20.370 04:14:34 -- nvmf/common.sh@717 -- # local ip 00:23:20.370 04:14:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:20.370 04:14:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:20.370 04:14:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.370 04:14:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.370 04:14:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:20.370 04:14:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.370 04:14:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:20.370 04:14:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:20.370 04:14:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:20.370 04:14:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:20.370 04:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 
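[Editor's note] The loop markers at host/auth.sh@107-110 ("for digest", "for dhgroup", "for keyid") show the overall shape of the sweep: every digest x DH group x key id combination is provisioned on the target, then exercised end-to-end. A sketch of that structure; the array contents are inferred from what appears in this excerpt (sha384 and sha512 with ffdhe2048 through ffdhe8192 are visible here, and sha256 presumably ran before it):

    digests=(sha256 sha384 sha512)                     # inferred, only sha384/sha512 visible here
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do             # keys[0..4] hold the DHHC-1 secrets
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # host side, see sketch above
            done
        done
    done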
00:23:20.370 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:20.937 nvme0n1 00:23:20.937 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.937 04:14:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.937 04:14:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:20.937 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.937 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:20.937 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.937 04:14:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.937 04:14:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.938 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.938 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:20.938 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.938 04:14:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:20.938 04:14:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:20.938 04:14:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:20.938 04:14:35 -- host/auth.sh@44 -- # digest=sha384 00:23:20.938 04:14:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.938 04:14:35 -- host/auth.sh@44 -- # keyid=4 00:23:20.938 04:14:35 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:20.938 04:14:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:20.938 04:14:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:20.938 04:14:35 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:20.938 04:14:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:23:20.938 04:14:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:20.938 04:14:35 -- host/auth.sh@68 -- # digest=sha384 00:23:20.938 04:14:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:20.938 04:14:35 -- host/auth.sh@68 -- # keyid=4 00:23:20.938 04:14:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.938 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.938 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:20.938 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.938 04:14:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:20.938 04:14:35 -- nvmf/common.sh@717 -- # local ip 00:23:20.938 04:14:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:20.938 04:14:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:20.938 04:14:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.938 04:14:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.938 04:14:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:20.938 04:14:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.938 04:14:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:20.938 04:14:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:20.938 04:14:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:20.938 04:14:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.938 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.938 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.197 
nvme0n1 00:23:21.197 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.197 04:14:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.197 04:14:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:21.197 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.197 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.455 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.455 04:14:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.455 04:14:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.455 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.455 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.455 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.455 04:14:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.455 04:14:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:21.455 04:14:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:21.455 04:14:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:21.455 04:14:35 -- host/auth.sh@44 -- # digest=sha384 00:23:21.455 04:14:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.455 04:14:35 -- host/auth.sh@44 -- # keyid=0 00:23:21.455 04:14:35 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:21.455 04:14:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:21.455 04:14:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:21.455 04:14:35 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:21.455 04:14:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:23:21.455 04:14:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:21.455 04:14:35 -- host/auth.sh@68 -- # digest=sha384 00:23:21.455 04:14:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:21.455 04:14:35 -- host/auth.sh@68 -- # keyid=0 00:23:21.455 04:14:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:21.455 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.455 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:21.455 04:14:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.455 04:14:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:21.455 04:14:35 -- nvmf/common.sh@717 -- # local ip 00:23:21.455 04:14:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:21.455 04:14:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:21.455 04:14:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.455 04:14:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.455 04:14:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:21.455 04:14:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.455 04:14:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:21.455 04:14:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:21.455 04:14:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:21.455 04:14:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:21.456 04:14:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.456 04:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:22.389 nvme0n1 00:23:22.389 04:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
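[Editor's note] Before each attach, get_main_ns_ip (nvmf/common.sh@717-731 in the trace) resolves which environment variable carries the IP for the active transport and prints its value, 10.0.0.1 here. A reconstruction from the expansions the trace shows; the TEST_TRANSPORT variable name and the exact error handling are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Trace shows both tests at @723: [[ -z tcp ]] then [[ -z NVMF_INITIATOR_IP ]].
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi
        ip=${ip_candidates[$TEST_TRANSPORT]}   # ip holds the *name* NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # prints 10.0.0.1
    }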
00:23:22.389 04:14:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.389 04:14:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:22.389 04:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.389 04:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.389 04:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.389 04:14:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.389 04:14:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.389 04:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.389 04:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.389 04:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.389 04:14:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:22.389 04:14:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:22.389 04:14:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:22.389 04:14:36 -- host/auth.sh@44 -- # digest=sha384 00:23:22.389 04:14:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.389 04:14:36 -- host/auth.sh@44 -- # keyid=1 00:23:22.390 04:14:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:22.390 04:14:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:22.390 04:14:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:22.390 04:14:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:22.390 04:14:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:23:22.390 04:14:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:22.390 04:14:36 -- host/auth.sh@68 -- # digest=sha384 00:23:22.390 04:14:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:22.390 04:14:36 -- host/auth.sh@68 -- # keyid=1 00:23:22.390 04:14:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.390 04:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.390 04:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.390 04:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.390 04:14:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:22.390 04:14:36 -- nvmf/common.sh@717 -- # local ip 00:23:22.390 04:14:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:22.390 04:14:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:22.390 04:14:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.390 04:14:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.390 04:14:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:22.390 04:14:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.390 04:14:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:22.390 04:14:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:22.390 04:14:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:22.390 04:14:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:22.390 04:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.390 04:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:22.956 nvme0n1 00:23:22.956 04:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.956 04:14:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.956 04:14:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.956 04:14:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:22.956 04:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.956 04:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.956 04:14:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.956 04:14:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.956 04:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.956 04:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.956 04:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.956 04:14:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:22.956 04:14:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:22.956 04:14:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:22.956 04:14:37 -- host/auth.sh@44 -- # digest=sha384 00:23:22.956 04:14:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.956 04:14:37 -- host/auth.sh@44 -- # keyid=2 00:23:22.956 04:14:37 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:22.956 04:14:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:22.956 04:14:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:22.956 04:14:37 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:22.956 04:14:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:23:22.956 04:14:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:22.956 04:14:37 -- host/auth.sh@68 -- # digest=sha384 00:23:22.956 04:14:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:22.956 04:14:37 -- host/auth.sh@68 -- # keyid=2 00:23:22.956 04:14:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.956 04:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.956 04:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.956 04:14:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.956 04:14:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:22.956 04:14:37 -- nvmf/common.sh@717 -- # local ip 00:23:22.956 04:14:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:22.956 04:14:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:22.956 04:14:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.956 04:14:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.956 04:14:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:22.956 04:14:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.956 04:14:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:22.956 04:14:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:22.956 04:14:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:22.956 04:14:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:22.956 04:14:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.956 04:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:23.891 nvme0n1 00:23:23.891 04:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.891 04:14:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.891 04:14:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:23.891 04:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.891 04:14:38 -- common/autotest_common.sh@10 
-- # set +x 00:23:23.891 04:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.891 04:14:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.891 04:14:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.891 04:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.891 04:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:23.891 04:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.891 04:14:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:23.891 04:14:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:23.891 04:14:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:23.891 04:14:38 -- host/auth.sh@44 -- # digest=sha384 00:23:23.891 04:14:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.891 04:14:38 -- host/auth.sh@44 -- # keyid=3 00:23:23.891 04:14:38 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:23.891 04:14:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:23.891 04:14:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:23.891 04:14:38 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:23.891 04:14:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:23:23.891 04:14:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:23.891 04:14:38 -- host/auth.sh@68 -- # digest=sha384 00:23:23.891 04:14:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:23.891 04:14:38 -- host/auth.sh@68 -- # keyid=3 00:23:23.891 04:14:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:23.891 04:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.891 04:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:23.891 04:14:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.891 04:14:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:23.891 04:14:38 -- nvmf/common.sh@717 -- # local ip 00:23:23.891 04:14:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:23.891 04:14:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:23.891 04:14:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.891 04:14:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.891 04:14:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:23.891 04:14:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.891 04:14:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:23.891 04:14:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:23.891 04:14:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:23.891 04:14:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:23.891 04:14:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.891 04:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:24.825 nvme0n1 00:23:24.825 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.825 04:14:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:24.825 04:14:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.825 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.825 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:24.825 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.825 04:14:39 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.825 04:14:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.825 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.825 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:24.825 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.825 04:14:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:24.825 04:14:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:24.825 04:14:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:24.826 04:14:39 -- host/auth.sh@44 -- # digest=sha384 00:23:24.826 04:14:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.826 04:14:39 -- host/auth.sh@44 -- # keyid=4 00:23:24.826 04:14:39 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:24.826 04:14:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:24.826 04:14:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:24.826 04:14:39 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:24.826 04:14:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:23:24.826 04:14:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:24.826 04:14:39 -- host/auth.sh@68 -- # digest=sha384 00:23:24.826 04:14:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:24.826 04:14:39 -- host/auth.sh@68 -- # keyid=4 00:23:24.826 04:14:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:24.826 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.826 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:24.826 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.826 04:14:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:24.826 04:14:39 -- nvmf/common.sh@717 -- # local ip 00:23:24.826 04:14:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:24.826 04:14:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:24.826 04:14:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.826 04:14:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.826 04:14:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:24.826 04:14:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.826 04:14:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:24.826 04:14:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:24.826 04:14:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:24.826 04:14:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.826 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.826 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:25.760 nvme0n1 00:23:25.760 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.760 04:14:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.760 04:14:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:25.760 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.760 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:25.760 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.761 04:14:39 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.761 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 04:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:39 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:25.761 04:14:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.761 04:14:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:25.761 04:14:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:25.761 04:14:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:25.761 04:14:39 -- host/auth.sh@44 -- # digest=sha512 00:23:25.761 04:14:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.761 04:14:39 -- host/auth.sh@44 -- # keyid=0 00:23:25.761 04:14:39 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:25.761 04:14:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:25.761 04:14:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:25.761 04:14:39 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:25.761 04:14:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:23:25.761 04:14:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:25.761 04:14:39 -- host/auth.sh@68 -- # digest=sha512 00:23:25.761 04:14:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:25.761 04:14:39 -- host/auth.sh@68 -- # keyid=0 00:23:25.761 04:14:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.761 04:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:25.761 04:14:40 -- nvmf/common.sh@717 -- # local ip 00:23:25.761 04:14:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:25.761 04:14:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:25.761 04:14:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:25.761 04:14:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:25.761 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 nvme0n1 00:23:25.761 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.761 04:14:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:25.761 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.761 04:14:40 -- 
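
The rest of this section is the same two-step exercise swept across every combination under test: the host/auth.sh@107-109 entries above show the three nested loops that pick a digest, a DH group and a key index, install that key on the target (@110), and reconnect with it (@111). As a sketch of the sweep's shape, in bash — the literal array definitions are not part of this excerpt, so the values below are only those that actually appear in this log and should be read as assumptions:

  # Shape of the sweep at host/auth.sh@107-111, reconstructed from the trace.
  # Only sha384/sha512 and ffdhe2048..ffdhe8192 are visible in this excerpt;
  # the real script may cover more. keys[0..4] hold the DHHC-1 secrets
  # echoed at host/auth.sh@45 (see the key-format note further down).
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
      done
    done
  done
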
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.761 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:25.761 04:14:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:25.761 04:14:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:25.761 04:14:40 -- host/auth.sh@44 -- # digest=sha512 00:23:25.761 04:14:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.761 04:14:40 -- host/auth.sh@44 -- # keyid=1 00:23:25.761 04:14:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:25.761 04:14:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:25.761 04:14:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:25.761 04:14:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:25.761 04:14:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:23:25.761 04:14:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:25.761 04:14:40 -- host/auth.sh@68 -- # digest=sha512 00:23:25.761 04:14:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:25.761 04:14:40 -- host/auth.sh@68 -- # keyid=1 00:23:25.761 04:14:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.761 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:25.761 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.761 04:14:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:25.761 04:14:40 -- nvmf/common.sh@717 -- # local ip 00:23:25.761 04:14:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:25.761 04:14:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:25.761 04:14:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:25.761 04:14:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:25.761 04:14:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:25.761 04:14:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:25.761 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.761 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.019 nvme0n1 00:23:26.019 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.019 04:14:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.019 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.019 04:14:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:26.019 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.019 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.019 04:14:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.019 04:14:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.019 04:14:40 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:23:26.019 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.019 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.019 04:14:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:26.019 04:14:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:26.019 04:14:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:26.019 04:14:40 -- host/auth.sh@44 -- # digest=sha512 00:23:26.019 04:14:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.019 04:14:40 -- host/auth.sh@44 -- # keyid=2 00:23:26.019 04:14:40 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:26.019 04:14:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:26.019 04:14:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:26.019 04:14:40 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:26.019 04:14:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:23:26.019 04:14:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:26.019 04:14:40 -- host/auth.sh@68 -- # digest=sha512 00:23:26.019 04:14:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:26.019 04:14:40 -- host/auth.sh@68 -- # keyid=2 00:23:26.019 04:14:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.019 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.019 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.019 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.019 04:14:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:26.019 04:14:40 -- nvmf/common.sh@717 -- # local ip 00:23:26.019 04:14:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:26.019 04:14:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:26.019 04:14:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.019 04:14:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.019 04:14:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:26.019 04:14:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.019 04:14:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:26.019 04:14:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:26.019 04:14:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:26.019 04:14:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:26.019 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.019 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.277 nvme0n1 00:23:26.277 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.277 04:14:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.277 04:14:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:26.277 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.277 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.277 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.277 04:14:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.277 04:14:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.277 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.277 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.277 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.277 
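
The three echo lines traced at host/auth.sh@47-49 in each iteration are the target-side half of the test: they push the HMAC transform, the FFDHE group and the DHHC-1 secret to the kernel nvmet host entry. xtrace does not show where the echoes are redirected, so the configfs paths in this sketch are an assumption based on the usual nvmet layout, not something taken from this log:

  # Hedged sketch of nvmet_auth_set_key (host/auth.sh@42-49). The configfs
  # attribute paths below are assumed and may differ from the real script.
  nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}
    # Assumed location of the host entry for the initiator's hostnqn:
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"  # e.g. hmac(sha512)
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"    # e.g. ffdhe2048
    echo "${key}" > "${host}/dhchap_key"            # DHHC-1:01:...
  }

The secrets themselves follow the NVMe DH-HMAC-CHAP representation, DHHC-1:<tt>:<base64>:, where the two-digit field selects the optional secret transform (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret followed by a CRC-32; the :00:/:01: keys in this run decode to 32-byte secrets, the :02: key to 48 bytes and the :03: key to 64 bytes.
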
04:14:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:26.277 04:14:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:26.277 04:14:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:26.277 04:14:40 -- host/auth.sh@44 -- # digest=sha512 00:23:26.277 04:14:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.277 04:14:40 -- host/auth.sh@44 -- # keyid=3 00:23:26.277 04:14:40 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:26.277 04:14:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:26.277 04:14:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:26.277 04:14:40 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:26.277 04:14:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:23:26.277 04:14:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:26.277 04:14:40 -- host/auth.sh@68 -- # digest=sha512 00:23:26.277 04:14:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:26.277 04:14:40 -- host/auth.sh@68 -- # keyid=3 00:23:26.277 04:14:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.277 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.277 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.277 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.277 04:14:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:26.277 04:14:40 -- nvmf/common.sh@717 -- # local ip 00:23:26.277 04:14:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:26.277 04:14:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:26.277 04:14:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.277 04:14:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.277 04:14:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:26.277 04:14:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.277 04:14:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:26.277 04:14:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:26.277 04:14:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:26.277 04:14:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:26.277 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.277 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 nvme0n1 00:23:26.535 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.535 04:14:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.535 04:14:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:26.535 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.535 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.535 04:14:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.535 04:14:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.535 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.535 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.535 04:14:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:26.535 04:14:40 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:23:26.535 04:14:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:26.536 04:14:40 -- host/auth.sh@44 -- # digest=sha512 00:23:26.536 04:14:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.536 04:14:40 -- host/auth.sh@44 -- # keyid=4 00:23:26.536 04:14:40 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:26.536 04:14:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:26.536 04:14:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:26.536 04:14:40 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:26.536 04:14:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:23:26.536 04:14:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:26.536 04:14:40 -- host/auth.sh@68 -- # digest=sha512 00:23:26.536 04:14:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:26.536 04:14:40 -- host/auth.sh@68 -- # keyid=4 00:23:26.536 04:14:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.536 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.536 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.536 04:14:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.536 04:14:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:26.536 04:14:40 -- nvmf/common.sh@717 -- # local ip 00:23:26.536 04:14:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:26.536 04:14:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:26.536 04:14:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.536 04:14:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.536 04:14:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:26.536 04:14:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.536 04:14:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:26.536 04:14:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:26.536 04:14:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:26.536 04:14:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.536 04:14:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.536 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:23:26.793 nvme0n1 00:23:26.793 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.793 04:14:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.793 04:14:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:26.793 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.793 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:26.793 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.793 04:14:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.793 04:14:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.793 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.793 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:26.793 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.793 04:14:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.793 04:14:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:26.793 04:14:41 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:23:26.793 04:14:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:26.793 04:14:41 -- host/auth.sh@44 -- # digest=sha512 00:23:26.793 04:14:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.793 04:14:41 -- host/auth.sh@44 -- # keyid=0 00:23:26.793 04:14:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:26.793 04:14:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:26.793 04:14:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:26.793 04:14:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:26.793 04:14:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:23:26.793 04:14:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:26.793 04:14:41 -- host/auth.sh@68 -- # digest=sha512 00:23:26.793 04:14:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:26.793 04:14:41 -- host/auth.sh@68 -- # keyid=0 00:23:26.794 04:14:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.794 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.794 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:26.794 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.794 04:14:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:26.794 04:14:41 -- nvmf/common.sh@717 -- # local ip 00:23:26.794 04:14:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:26.794 04:14:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:26.794 04:14:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.794 04:14:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.794 04:14:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:26.794 04:14:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.794 04:14:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:26.794 04:14:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:26.794 04:14:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:26.794 04:14:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:26.794 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.794 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 nvme0n1 00:23:27.051 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.051 04:14:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.051 04:14:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:27.051 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.051 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.051 04:14:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.051 04:14:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.051 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.051 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.051 04:14:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:27.051 04:14:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:27.051 04:14:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:27.051 04:14:41 -- host/auth.sh@44 -- # 
digest=sha512 00:23:27.051 04:14:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.051 04:14:41 -- host/auth.sh@44 -- # keyid=1 00:23:27.051 04:14:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:27.051 04:14:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:27.051 04:14:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:27.051 04:14:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:27.051 04:14:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:23:27.051 04:14:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:27.051 04:14:41 -- host/auth.sh@68 -- # digest=sha512 00:23:27.051 04:14:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:27.051 04:14:41 -- host/auth.sh@68 -- # keyid=1 00:23:27.051 04:14:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.051 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.051 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.051 04:14:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:27.051 04:14:41 -- nvmf/common.sh@717 -- # local ip 00:23:27.051 04:14:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:27.051 04:14:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:27.051 04:14:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.051 04:14:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.051 04:14:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:27.051 04:14:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.051 04:14:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:27.051 04:14:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:27.051 04:14:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:27.051 04:14:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:27.051 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.051 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.308 nvme0n1 00:23:27.308 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.308 04:14:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.308 04:14:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:27.308 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.308 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.308 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.308 04:14:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.308 04:14:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.308 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.308 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.308 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.308 04:14:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:27.308 04:14:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:27.308 04:14:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:27.308 04:14:41 -- host/auth.sh@44 -- # digest=sha512 00:23:27.308 04:14:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.308 04:14:41 -- host/auth.sh@44 
-- # keyid=2 00:23:27.308 04:14:41 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:27.308 04:14:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:27.308 04:14:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:27.308 04:14:41 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:27.308 04:14:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:23:27.308 04:14:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:27.308 04:14:41 -- host/auth.sh@68 -- # digest=sha512 00:23:27.308 04:14:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:27.308 04:14:41 -- host/auth.sh@68 -- # keyid=2 00:23:27.308 04:14:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.308 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.308 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.308 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.308 04:14:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:27.308 04:14:41 -- nvmf/common.sh@717 -- # local ip 00:23:27.308 04:14:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:27.308 04:14:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:27.308 04:14:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.308 04:14:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.308 04:14:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:27.308 04:14:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.308 04:14:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:27.308 04:14:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:27.308 04:14:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:27.308 04:14:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:27.308 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.308 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.566 nvme0n1 00:23:27.566 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.566 04:14:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.566 04:14:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:27.566 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.566 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.566 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.566 04:14:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.566 04:14:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.566 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.566 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.566 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.566 04:14:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:27.566 04:14:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:27.566 04:14:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:27.566 04:14:41 -- host/auth.sh@44 -- # digest=sha512 00:23:27.566 04:14:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.566 04:14:41 -- host/auth.sh@44 -- # keyid=3 00:23:27.566 04:14:41 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:27.566 04:14:41 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:27.566 04:14:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:27.566 04:14:41 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:27.566 04:14:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:23:27.566 04:14:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:27.566 04:14:41 -- host/auth.sh@68 -- # digest=sha512 00:23:27.566 04:14:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:27.566 04:14:41 -- host/auth.sh@68 -- # keyid=3 00:23:27.566 04:14:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.566 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.566 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.566 04:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.566 04:14:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:27.566 04:14:41 -- nvmf/common.sh@717 -- # local ip 00:23:27.566 04:14:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:27.566 04:14:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:27.566 04:14:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.566 04:14:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.566 04:14:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:27.566 04:14:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.566 04:14:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:27.566 04:14:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:27.566 04:14:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:27.566 04:14:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:27.566 04:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.566 04:14:41 -- common/autotest_common.sh@10 -- # set +x 00:23:27.823 nvme0n1 00:23:27.823 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.823 04:14:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.823 04:14:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:27.823 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.823 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:27.823 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.823 04:14:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.823 04:14:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.823 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.823 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:27.823 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.823 04:14:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:27.823 04:14:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:27.823 04:14:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:27.823 04:14:42 -- host/auth.sh@44 -- # digest=sha512 00:23:27.823 04:14:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.823 04:14:42 -- host/auth.sh@44 -- # keyid=4 00:23:27.823 04:14:42 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:27.823 04:14:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:27.823 04:14:42 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:23:27.823 04:14:42 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:27.823 04:14:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:23:27.823 04:14:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:27.823 04:14:42 -- host/auth.sh@68 -- # digest=sha512 00:23:27.823 04:14:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:27.823 04:14:42 -- host/auth.sh@68 -- # keyid=4 00:23:27.823 04:14:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.823 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.823 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:27.823 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.823 04:14:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:27.823 04:14:42 -- nvmf/common.sh@717 -- # local ip 00:23:27.823 04:14:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:27.823 04:14:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:27.823 04:14:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.823 04:14:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.823 04:14:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:27.823 04:14:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.823 04:14:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:27.823 04:14:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:27.823 04:14:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:27.823 04:14:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.823 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.824 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.082 nvme0n1 00:23:28.082 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.082 04:14:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.082 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.082 04:14:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:28.082 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.082 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.082 04:14:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.082 04:14:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.082 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.082 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.082 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.082 04:14:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.082 04:14:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:28.082 04:14:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:28.082 04:14:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:28.082 04:14:42 -- host/auth.sh@44 -- # digest=sha512 00:23:28.082 04:14:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.082 04:14:42 -- host/auth.sh@44 -- # keyid=0 00:23:28.082 04:14:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:28.082 04:14:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:28.082 04:14:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:28.082 04:14:42 -- 
host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:28.082 04:14:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:23:28.082 04:14:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:28.082 04:14:42 -- host/auth.sh@68 -- # digest=sha512 00:23:28.082 04:14:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:28.082 04:14:42 -- host/auth.sh@68 -- # keyid=0 00:23:28.082 04:14:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.082 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.082 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.082 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.082 04:14:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:28.082 04:14:42 -- nvmf/common.sh@717 -- # local ip 00:23:28.082 04:14:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:28.082 04:14:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:28.082 04:14:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.082 04:14:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.082 04:14:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:28.082 04:14:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.082 04:14:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:28.082 04:14:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:28.082 04:14:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:28.082 04:14:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:28.082 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.082 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.340 nvme0n1 00:23:28.340 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.340 04:14:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.340 04:14:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:28.340 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.340 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.340 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.340 04:14:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.340 04:14:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.340 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.340 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.340 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.340 04:14:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:28.340 04:14:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:28.340 04:14:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:28.340 04:14:42 -- host/auth.sh@44 -- # digest=sha512 00:23:28.340 04:14:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.340 04:14:42 -- host/auth.sh@44 -- # keyid=1 00:23:28.340 04:14:42 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:28.340 04:14:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:28.340 04:14:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:28.340 04:14:42 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:28.340 04:14:42 -- 
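
Before every attach, the nvmf/common.sh@717-731 entries trace get_main_ns_ip resolving which address the initiator should dial. Reconstructed from that trace as a sketch — the name of the transport variable is an assumption, since xtrace only shows its expanded value, tcp:

  # get_main_ns_ip per the nvmf/common.sh@717-731 trace: map the transport
  # to the name of the shell variable holding the address, then dereference
  # it with ${!ip} (which expands to 10.0.0.1 in this run).
  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                   # variable name assumed
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
  }
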
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:23:28.340 04:14:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:28.340 04:14:42 -- host/auth.sh@68 -- # digest=sha512 00:23:28.340 04:14:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:28.340 04:14:42 -- host/auth.sh@68 -- # keyid=1 00:23:28.340 04:14:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.340 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.340 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.340 04:14:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.340 04:14:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:28.340 04:14:42 -- nvmf/common.sh@717 -- # local ip 00:23:28.340 04:14:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:28.340 04:14:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:28.340 04:14:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.340 04:14:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.340 04:14:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:28.340 04:14:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.340 04:14:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:28.340 04:14:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:28.340 04:14:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:28.340 04:14:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:28.340 04:14:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.340 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 nvme0n1 00:23:28.597 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.597 04:14:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:28.597 04:14:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.597 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.597 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.597 04:14:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.597 04:14:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.597 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.597 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:28.854 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.854 04:14:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:28.854 04:14:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:28.854 04:14:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:28.854 04:14:43 -- host/auth.sh@44 -- # digest=sha512 00:23:28.854 04:14:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.854 04:14:43 -- host/auth.sh@44 -- # keyid=2 00:23:28.854 04:14:43 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:28.854 04:14:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:28.854 04:14:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:28.854 04:14:43 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:28.854 04:14:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:23:28.854 04:14:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:28.854 04:14:43 -- 
host/auth.sh@68 -- # digest=sha512 00:23:28.854 04:14:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:28.854 04:14:43 -- host/auth.sh@68 -- # keyid=2 00:23:28.854 04:14:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.854 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.854 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:28.854 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.854 04:14:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:28.854 04:14:43 -- nvmf/common.sh@717 -- # local ip 00:23:28.854 04:14:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:28.854 04:14:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:28.854 04:14:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.854 04:14:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.854 04:14:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:28.854 04:14:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.854 04:14:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:28.854 04:14:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:28.854 04:14:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:28.854 04:14:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:28.854 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.854 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.111 nvme0n1 00:23:29.111 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.111 04:14:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.111 04:14:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:29.111 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.111 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.111 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.111 04:14:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.111 04:14:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.111 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.111 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.111 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.111 04:14:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:29.111 04:14:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:29.111 04:14:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:29.111 04:14:43 -- host/auth.sh@44 -- # digest=sha512 00:23:29.111 04:14:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.111 04:14:43 -- host/auth.sh@44 -- # keyid=3 00:23:29.111 04:14:43 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:29.111 04:14:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:29.111 04:14:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:29.111 04:14:43 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:29.111 04:14:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:23:29.111 04:14:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:29.111 04:14:43 -- host/auth.sh@68 -- # digest=sha512 00:23:29.111 04:14:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:29.111 04:14:43 
-- host/auth.sh@68 -- # keyid=3 00:23:29.111 04:14:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.111 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.111 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.111 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.111 04:14:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:29.111 04:14:43 -- nvmf/common.sh@717 -- # local ip 00:23:29.111 04:14:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:29.111 04:14:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:29.111 04:14:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.111 04:14:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.111 04:14:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:29.111 04:14:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.111 04:14:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:29.111 04:14:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:29.111 04:14:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:29.111 04:14:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:29.111 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.111 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.368 nvme0n1 00:23:29.368 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.368 04:14:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.368 04:14:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:29.368 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.368 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.369 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.369 04:14:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.369 04:14:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.369 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.369 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.369 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.369 04:14:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:29.369 04:14:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:29.369 04:14:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:29.369 04:14:43 -- host/auth.sh@44 -- # digest=sha512 00:23:29.369 04:14:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.369 04:14:43 -- host/auth.sh@44 -- # keyid=4 00:23:29.369 04:14:43 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:29.369 04:14:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:29.369 04:14:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:29.369 04:14:43 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:29.369 04:14:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:23:29.369 04:14:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:29.369 04:14:43 -- host/auth.sh@68 -- # digest=sha512 00:23:29.369 04:14:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:29.369 04:14:43 -- host/auth.sh@68 -- # keyid=4 00:23:29.369 04:14:43 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.369 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.369 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.369 04:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.369 04:14:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:29.369 04:14:43 -- nvmf/common.sh@717 -- # local ip 00:23:29.369 04:14:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:29.369 04:14:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:29.369 04:14:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.369 04:14:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.369 04:14:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:29.369 04:14:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.369 04:14:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:29.369 04:14:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:29.369 04:14:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:29.369 04:14:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.369 04:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.369 04:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:29.626 nvme0n1 00:23:29.626 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.626 04:14:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:29.626 04:14:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.626 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.626 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:29.626 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.626 04:14:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.626 04:14:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.626 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.626 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:29.883 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.883 04:14:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.883 04:14:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:29.883 04:14:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:29.883 04:14:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:29.883 04:14:44 -- host/auth.sh@44 -- # digest=sha512 00:23:29.883 04:14:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.883 04:14:44 -- host/auth.sh@44 -- # keyid=0 00:23:29.883 04:14:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:29.883 04:14:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:29.883 04:14:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:29.883 04:14:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:29.883 04:14:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:23:29.883 04:14:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:29.883 04:14:44 -- host/auth.sh@68 -- # digest=sha512 00:23:29.883 04:14:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:29.883 04:14:44 -- host/auth.sh@68 -- # keyid=0 00:23:29.883 04:14:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
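
Each connect_authenticate call (host/auth.sh@66-74) is the initiator-side mirror of the key just installed: it pins the SPDK bdev layer to a single digest/dhgroup pair, attaches with the matching key, checks that a controller actually came up, and detaches so the next combination starts clean. Condensed from the RPCs visible in the trace — a sketch of the sequence, not the verbatim function:

  # One authentication round trip, per the host/auth.sh@69-74 trace.
  connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict DH-HMAC-CHAP negotiation to exactly this combination.
    rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach; this only succeeds if the target accepts key$keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # Verify a controller exists, then tear it down for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  }

The bare nvme0n1 tokens interleaved through the log look to be the namespace of nqn.2024-02.io.spdk:cnode0 surfacing after each successful attach, just before the get_controllers/jq verification runs.
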
00:23:29.883 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.883 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:29.883 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.883 04:14:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:29.883 04:14:44 -- nvmf/common.sh@717 -- # local ip 00:23:29.883 04:14:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:29.883 04:14:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:29.883 04:14:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.883 04:14:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.883 04:14:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:29.883 04:14:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.883 04:14:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:29.883 04:14:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:29.883 04:14:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:29.883 04:14:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:29.883 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.883 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:30.140 nvme0n1 00:23:30.140 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.140 04:14:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.140 04:14:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:30.140 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.140 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:30.140 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.398 04:14:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.398 04:14:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.398 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.398 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:30.398 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.398 04:14:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:30.398 04:14:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:30.398 04:14:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:30.398 04:14:44 -- host/auth.sh@44 -- # digest=sha512 00:23:30.398 04:14:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.398 04:14:44 -- host/auth.sh@44 -- # keyid=1 00:23:30.398 04:14:44 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:30.398 04:14:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:30.398 04:14:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:30.398 04:14:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:30.398 04:14:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:23:30.398 04:14:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:30.398 04:14:44 -- host/auth.sh@68 -- # digest=sha512 00:23:30.398 04:14:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:30.398 04:14:44 -- host/auth.sh@68 -- # keyid=1 00:23:30.398 04:14:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.398 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.398 04:14:44 -- 
common/autotest_common.sh@10 -- # set +x 00:23:30.398 04:14:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.398 04:14:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:30.398 04:14:44 -- nvmf/common.sh@717 -- # local ip 00:23:30.398 04:14:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:30.398 04:14:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:30.398 04:14:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.398 04:14:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.398 04:14:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:30.398 04:14:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.398 04:14:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:30.398 04:14:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:30.398 04:14:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:30.398 04:14:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:30.398 04:14:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.398 04:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:30.656 nvme0n1 00:23:30.656 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.656 04:14:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.656 04:14:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:30.656 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.656 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:30.914 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.914 04:14:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.914 04:14:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.914 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.914 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:30.914 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.914 04:14:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:30.914 04:14:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:30.914 04:14:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:30.914 04:14:45 -- host/auth.sh@44 -- # digest=sha512 00:23:30.914 04:14:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.914 04:14:45 -- host/auth.sh@44 -- # keyid=2 00:23:30.914 04:14:45 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:30.914 04:14:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:30.914 04:14:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:30.914 04:14:45 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:30.914 04:14:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:23:30.914 04:14:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:30.914 04:14:45 -- host/auth.sh@68 -- # digest=sha512 00:23:30.914 04:14:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:30.914 04:14:45 -- host/auth.sh@68 -- # keyid=2 00:23:30.914 04:14:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.914 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.914 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:30.914 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.914 04:14:45 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:23:30.914 04:14:45 -- nvmf/common.sh@717 -- # local ip 00:23:30.914 04:14:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:30.914 04:14:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:30.914 04:14:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.914 04:14:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.914 04:14:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:30.914 04:14:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.914 04:14:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:30.914 04:14:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:30.914 04:14:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:30.914 04:14:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:30.914 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.914 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:31.480 nvme0n1 00:23:31.480 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.480 04:14:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.480 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.480 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:31.480 04:14:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:31.480 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.480 04:14:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.480 04:14:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.480 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.480 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:31.480 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.480 04:14:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:31.480 04:14:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:31.480 04:14:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:31.480 04:14:45 -- host/auth.sh@44 -- # digest=sha512 00:23:31.480 04:14:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.480 04:14:45 -- host/auth.sh@44 -- # keyid=3 00:23:31.480 04:14:45 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:31.480 04:14:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:31.480 04:14:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:31.480 04:14:45 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:31.480 04:14:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:23:31.481 04:14:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:31.481 04:14:45 -- host/auth.sh@68 -- # digest=sha512 00:23:31.481 04:14:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:31.481 04:14:45 -- host/auth.sh@68 -- # keyid=3 00:23:31.481 04:14:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.481 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.481 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:31.481 04:14:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.481 04:14:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:31.481 04:14:45 -- nvmf/common.sh@717 -- # local ip 00:23:31.481 04:14:45 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:23:31.481 04:14:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:31.481 04:14:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.481 04:14:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.481 04:14:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:31.481 04:14:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.481 04:14:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:31.481 04:14:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:31.481 04:14:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:31.481 04:14:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:31.481 04:14:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.481 04:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:31.738 nvme0n1 00:23:31.738 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.738 04:14:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.738 04:14:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:31.738 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.738 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:31.996 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.996 04:14:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.996 04:14:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.996 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.996 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:31.996 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.996 04:14:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:31.996 04:14:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:31.996 04:14:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:31.996 04:14:46 -- host/auth.sh@44 -- # digest=sha512 00:23:31.996 04:14:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.996 04:14:46 -- host/auth.sh@44 -- # keyid=4 00:23:31.996 04:14:46 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:31.996 04:14:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:31.996 04:14:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:31.996 04:14:46 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:31.996 04:14:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:23:31.996 04:14:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:31.996 04:14:46 -- host/auth.sh@68 -- # digest=sha512 00:23:31.996 04:14:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:31.996 04:14:46 -- host/auth.sh@68 -- # keyid=4 00:23:31.996 04:14:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.996 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.996 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:31.996 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.996 04:14:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:31.996 04:14:46 -- nvmf/common.sh@717 -- # local ip 00:23:31.996 04:14:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:31.996 04:14:46 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:23:31.996 04:14:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.996 04:14:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.996 04:14:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:31.996 04:14:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.996 04:14:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:31.996 04:14:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:31.996 04:14:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:31.996 04:14:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.996 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.996 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.562 nvme0n1 00:23:32.562 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.562 04:14:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.562 04:14:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:32.562 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.562 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.562 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.562 04:14:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.562 04:14:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.562 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.562 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.562 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.562 04:14:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.562 04:14:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:32.562 04:14:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:32.562 04:14:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:32.562 04:14:46 -- host/auth.sh@44 -- # digest=sha512 00:23:32.562 04:14:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.562 04:14:46 -- host/auth.sh@44 -- # keyid=0 00:23:32.562 04:14:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:32.562 04:14:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:32.562 04:14:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:32.562 04:14:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZWYyZTUzZGQyNDkyMDNlMDE5MmNmM2UwY2JlYzllN2XOsd8M: 00:23:32.562 04:14:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:23:32.562 04:14:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:32.562 04:14:46 -- host/auth.sh@68 -- # digest=sha512 00:23:32.562 04:14:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:32.562 04:14:46 -- host/auth.sh@68 -- # keyid=0 00:23:32.562 04:14:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:32.562 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.562 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:32.562 04:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.562 04:14:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:32.562 04:14:46 -- nvmf/common.sh@717 -- # local ip 00:23:32.562 04:14:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:32.562 04:14:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:32.562 04:14:46 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.562 04:14:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.562 04:14:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:32.562 04:14:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.562 04:14:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:32.562 04:14:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:32.562 04:14:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:32.562 04:14:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:32.563 04:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.563 04:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:33.127 nvme0n1 00:23:33.385 04:14:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.385 04:14:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.385 04:14:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.385 04:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:33.385 04:14:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:33.385 04:14:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.385 04:14:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.385 04:14:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.385 04:14:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.385 04:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:33.385 04:14:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.385 04:14:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:33.385 04:14:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:33.385 04:14:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:33.385 04:14:47 -- host/auth.sh@44 -- # digest=sha512 00:23:33.385 04:14:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.385 04:14:47 -- host/auth.sh@44 -- # keyid=1 00:23:33.385 04:14:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:33.385 04:14:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:33.385 04:14:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:33.385 04:14:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:33.385 04:14:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:23:33.385 04:14:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:33.385 04:14:47 -- host/auth.sh@68 -- # digest=sha512 00:23:33.385 04:14:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:33.385 04:14:47 -- host/auth.sh@68 -- # keyid=1 00:23:33.385 04:14:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.385 04:14:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.385 04:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:33.386 04:14:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.386 04:14:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:33.386 04:14:47 -- nvmf/common.sh@717 -- # local ip 00:23:33.386 04:14:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:33.386 04:14:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:33.386 04:14:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.386 04:14:47 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.386 04:14:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:33.386 04:14:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.386 04:14:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:33.386 04:14:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:33.386 04:14:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:33.386 04:14:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:33.386 04:14:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.386 04:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 nvme0n1 00:23:34.318 04:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.318 04:14:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.318 04:14:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.318 04:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.318 04:14:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 04:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.318 04:14:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.318 04:14:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.318 04:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.318 04:14:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 04:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.318 04:14:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:34.318 04:14:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:34.318 04:14:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:34.318 04:14:48 -- host/auth.sh@44 -- # digest=sha512 00:23:34.318 04:14:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.318 04:14:48 -- host/auth.sh@44 -- # keyid=2 00:23:34.318 04:14:48 -- host/auth.sh@45 -- # key=DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:34.318 04:14:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:34.318 04:14:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:34.318 04:14:48 -- host/auth.sh@49 -- # echo DHHC-1:01:ODQ2MzQxMzdhMDNkNmRiZWM0ODkyMDQ1Yzc5M2FkMWZCYOvd: 00:23:34.318 04:14:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:23:34.318 04:14:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:34.318 04:14:48 -- host/auth.sh@68 -- # digest=sha512 00:23:34.318 04:14:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:34.318 04:14:48 -- host/auth.sh@68 -- # keyid=2 00:23:34.318 04:14:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.318 04:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.318 04:14:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 04:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.318 04:14:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:34.318 04:14:48 -- nvmf/common.sh@717 -- # local ip 00:23:34.318 04:14:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:34.318 04:14:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:34.318 04:14:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.318 04:14:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.318 04:14:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:34.318 04:14:48 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:23:34.318 04:14:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:34.318 04:14:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:34.318 04:14:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:34.318 04:14:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:34.318 04:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.318 04:14:48 -- common/autotest_common.sh@10 -- # set +x 00:23:34.884 nvme0n1 00:23:34.884 04:14:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.884 04:14:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.884 04:14:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.884 04:14:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:34.884 04:14:49 -- common/autotest_common.sh@10 -- # set +x 00:23:34.884 04:14:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.142 04:14:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.142 04:14:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.142 04:14:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.142 04:14:49 -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 04:14:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.142 04:14:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:35.142 04:14:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:35.142 04:14:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:35.142 04:14:49 -- host/auth.sh@44 -- # digest=sha512 00:23:35.142 04:14:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.142 04:14:49 -- host/auth.sh@44 -- # keyid=3 00:23:35.142 04:14:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:35.142 04:14:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:35.142 04:14:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:35.142 04:14:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MzdmMTk3OTVjZWYwNWM3OGJlZTdjYWFhMzVjNTliYzUzM2M0MGJjODhlN2RlOTUyZEw7GQ==: 00:23:35.142 04:14:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:23:35.142 04:14:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:35.142 04:14:49 -- host/auth.sh@68 -- # digest=sha512 00:23:35.142 04:14:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:35.142 04:14:49 -- host/auth.sh@68 -- # keyid=3 00:23:35.142 04:14:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.142 04:14:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.142 04:14:49 -- common/autotest_common.sh@10 -- # set +x 00:23:35.142 04:14:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.142 04:14:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:35.142 04:14:49 -- nvmf/common.sh@717 -- # local ip 00:23:35.142 04:14:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:35.142 04:14:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:35.142 04:14:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.142 04:14:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.142 04:14:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:35.142 04:14:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.142 04:14:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:35.142 04:14:49 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:35.142 04:14:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:35.142 04:14:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:35.142 04:14:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.142 04:14:49 -- common/autotest_common.sh@10 -- # set +x 00:23:36.077 nvme0n1 00:23:36.077 04:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.077 04:14:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.077 04:14:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:36.077 04:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.077 04:14:50 -- common/autotest_common.sh@10 -- # set +x 00:23:36.077 04:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.077 04:14:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.077 04:14:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.077 04:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.077 04:14:50 -- common/autotest_common.sh@10 -- # set +x 00:23:36.077 04:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.077 04:14:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:36.077 04:14:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:36.077 04:14:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:36.077 04:14:50 -- host/auth.sh@44 -- # digest=sha512 00:23:36.077 04:14:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.077 04:14:50 -- host/auth.sh@44 -- # keyid=4 00:23:36.077 04:14:50 -- host/auth.sh@45 -- # key=DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:36.077 04:14:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:36.077 04:14:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:36.077 04:14:50 -- host/auth.sh@49 -- # echo DHHC-1:03:MmNlNmE3ZmE1MmI0MGFhZThlYTEzZTkyZWU1Zjc0YmZkYmJkZDhiOGFiNmNkZjhiMGYxNzk5MTM3MWI5N2VhNkutdLQ=: 00:23:36.077 04:14:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:23:36.077 04:14:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:36.077 04:14:50 -- host/auth.sh@68 -- # digest=sha512 00:23:36.077 04:14:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:36.077 04:14:50 -- host/auth.sh@68 -- # keyid=4 00:23:36.077 04:14:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.077 04:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.077 04:14:50 -- common/autotest_common.sh@10 -- # set +x 00:23:36.077 04:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.077 04:14:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:36.077 04:14:50 -- nvmf/common.sh@717 -- # local ip 00:23:36.077 04:14:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:36.077 04:14:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:36.077 04:14:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.077 04:14:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.077 04:14:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:36.077 04:14:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.077 04:14:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:36.077 04:14:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:36.077 04:14:50 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:36.077 04:14:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.077 04:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.077 04:14:50 -- common/autotest_common.sh@10 -- # set +x 00:23:36.692 nvme0n1 00:23:36.692 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.692 04:14:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.692 04:14:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:36.692 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.692 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.692 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.692 04:14:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.692 04:14:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.693 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.693 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.693 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.693 04:14:51 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:36.693 04:14:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:36.693 04:14:51 -- host/auth.sh@44 -- # digest=sha256 00:23:36.693 04:14:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.693 04:14:51 -- host/auth.sh@44 -- # keyid=1 00:23:36.693 04:14:51 -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:36.693 04:14:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:36.693 04:14:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:36.693 04:14:51 -- host/auth.sh@49 -- # echo DHHC-1:00:NDNjYjliZTljNTdlZTZhNmM3YjYyMmIyMGQzNmM5NzJlNDhhYWJkYzA4ZmY3ODlhle0hIQ==: 00:23:36.693 04:14:51 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:36.693 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.693 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.976 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.976 04:14:51 -- host/auth.sh@119 -- # get_main_ns_ip 00:23:36.976 04:14:51 -- nvmf/common.sh@717 -- # local ip 00:23:36.976 04:14:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:36.976 04:14:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:36.976 04:14:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.976 04:14:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.976 04:14:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:36.976 04:14:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.976 04:14:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:36.976 04:14:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:36.976 04:14:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:36.976 04:14:51 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.976 04:14:51 -- common/autotest_common.sh@638 -- # local es=0 00:23:36.976 04:14:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.976 
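The two attach attempts that follow are the negative half of the test: connecting with no key, then with the wrong key (key2), must both be rejected, and the trace indeed shows two -32602 "Invalid parameters" JSON-RPC errors with zero controllers left afterwards (the jq length / (( 0 == 0 )) checks). The NOT wrapper whose xtrace is interleaved here inverts the wrapped command's exit status; a reduced sketch of that pattern, reconstructed from the visible trace rather than from autotest_common.sh itself:

# Reduced sketch of the NOT helper, inferred from the xtrace above.
# It succeeds only when the wrapped command fails; exit codes above 128
# (process killed by a signal) still propagate as real failures.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # crashed rather than merely rejected
    (( es == 0 )) && return 1       # command unexpectedly succeeded
    return 0                        # expected failure: negative test passes
}

NOT false && echo "failure was expected"   # prints the message
NOT true  || echo "unexpected success"     # prints the message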
04:14:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:36.976 04:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.976 04:14:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:36.976 04:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.976 04:14:51 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.976 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.976 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.976 request: 00:23:36.976 { 00:23:36.976 "name": "nvme0", 00:23:36.976 "trtype": "tcp", 00:23:36.976 "traddr": "10.0.0.1", 00:23:36.976 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:36.976 "adrfam": "ipv4", 00:23:36.976 "trsvcid": "4420", 00:23:36.976 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:36.976 "method": "bdev_nvme_attach_controller", 00:23:36.976 "req_id": 1 00:23:36.976 } 00:23:36.976 Got JSON-RPC error response 00:23:36.976 response: 00:23:36.976 { 00:23:36.976 "code": -32602, 00:23:36.976 "message": "Invalid parameters" 00:23:36.976 } 00:23:36.976 04:14:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:36.976 04:14:51 -- common/autotest_common.sh@641 -- # es=1 00:23:36.976 04:14:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:36.976 04:14:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:36.976 04:14:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:36.976 04:14:51 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.976 04:14:51 -- host/auth.sh@121 -- # jq length 00:23:36.976 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.976 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.977 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.977 04:14:51 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:23:36.977 04:14:51 -- host/auth.sh@124 -- # get_main_ns_ip 00:23:36.977 04:14:51 -- nvmf/common.sh@717 -- # local ip 00:23:36.977 04:14:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:36.977 04:14:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:36.977 04:14:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.977 04:14:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.977 04:14:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:36.977 04:14:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.977 04:14:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:36.977 04:14:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:36.977 04:14:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:36.977 04:14:51 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.977 04:14:51 -- common/autotest_common.sh@638 -- # local es=0 00:23:36.977 04:14:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.977 04:14:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:36.977 04:14:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.977 04:14:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:36.977 04:14:51 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:36.977 04:14:51 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.977 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.977 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.977 request: 00:23:36.977 { 00:23:36.977 "name": "nvme0", 00:23:36.977 "trtype": "tcp", 00:23:36.977 "traddr": "10.0.0.1", 00:23:36.977 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:36.977 "adrfam": "ipv4", 00:23:36.977 "trsvcid": "4420", 00:23:36.977 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:36.977 "dhchap_key": "key2", 00:23:36.977 "method": "bdev_nvme_attach_controller", 00:23:36.977 "req_id": 1 00:23:36.977 } 00:23:36.977 Got JSON-RPC error response 00:23:36.977 response: 00:23:36.977 { 00:23:36.977 "code": -32602, 00:23:36.977 "message": "Invalid parameters" 00:23:36.977 } 00:23:36.977 04:14:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:36.977 04:14:51 -- common/autotest_common.sh@641 -- # es=1 00:23:36.977 04:14:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:36.977 04:14:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:36.977 04:14:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:36.977 04:14:51 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.977 04:14:51 -- host/auth.sh@127 -- # jq length 00:23:36.977 04:14:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.977 04:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.977 04:14:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.977 04:14:51 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:23:36.977 04:14:51 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:23:36.977 04:14:51 -- host/auth.sh@130 -- # cleanup 00:23:36.977 04:14:51 -- host/auth.sh@24 -- # nvmftestfini 00:23:36.977 04:14:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:36.977 04:14:51 -- nvmf/common.sh@117 -- # sync 00:23:36.977 04:14:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.977 04:14:51 -- nvmf/common.sh@120 -- # set +e 00:23:36.977 04:14:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.977 04:14:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.977 rmmod nvme_tcp 00:23:36.977 rmmod nvme_fabrics 00:23:36.977 04:14:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.977 04:14:51 -- nvmf/common.sh@124 -- # set -e 00:23:36.977 04:14:51 -- nvmf/common.sh@125 -- # return 0 00:23:36.977 04:14:51 -- nvmf/common.sh@478 -- # '[' -n 3934234 ']' 00:23:36.977 04:14:51 -- nvmf/common.sh@479 -- # killprocess 3934234 00:23:36.977 04:14:51 -- common/autotest_common.sh@936 -- # '[' -z 3934234 ']' 00:23:36.977 04:14:51 -- common/autotest_common.sh@940 -- # kill -0 3934234 00:23:36.977 04:14:51 -- common/autotest_common.sh@941 -- # uname 00:23:36.977 04:14:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.977 04:14:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3934234 00:23:37.235 04:14:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.236 04:14:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.236 04:14:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3934234' 00:23:37.236 killing process with pid 3934234 00:23:37.236 04:14:51 -- common/autotest_common.sh@955 -- # kill 3934234 00:23:37.236 04:14:51 -- 
common/autotest_common.sh@960 -- # wait 3934234 00:23:37.236 04:14:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:37.236 04:14:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:37.236 04:14:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:37.236 04:14:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.236 04:14:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.236 04:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.236 04:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.236 04:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.766 04:14:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.766 04:14:53 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:39.766 04:14:53 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:39.766 04:14:53 -- host/auth.sh@27 -- # clean_kernel_target 00:23:39.766 04:14:53 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:39.766 04:14:53 -- nvmf/common.sh@675 -- # echo 0 00:23:39.766 04:14:53 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:39.766 04:14:53 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:39.766 04:14:53 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:39.766 04:14:53 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:39.766 04:14:53 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:39.766 04:14:53 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:39.766 04:14:53 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:42.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:42.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:43.239 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:23:43.239 04:14:57 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.iwe /tmp/spdk.key-null.YuR /tmp/spdk.key-sha256.qA0 /tmp/spdk.key-sha384.ocb /tmp/spdk.key-sha512.vPJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:43.239 04:14:57 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:46.522 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:46.522 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:23:46.522 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:46.522 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:46.522 00:23:46.522 real 0m55.180s 00:23:46.522 user 0m50.078s 00:23:46.522 sys 0m12.102s 00:23:46.522 04:15:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:46.522 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 ************************************ 00:23:46.522 END TEST nvmf_auth 00:23:46.522 ************************************ 00:23:46.522 04:15:00 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:23:46.522 04:15:00 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:46.522 04:15:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:46.522 04:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:46.522 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 ************************************ 00:23:46.522 START TEST nvmf_digest 00:23:46.522 ************************************ 00:23:46.522 04:15:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:46.522 * Looking for test storage... 
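One detail worth noting from the auth cleanup above before the digest suite gets going: configfs directories can only be removed leaf-first, so the teardown walks the tree bottom-up — drop the allowed_hosts and port-to-subsystem symlinks, disable and remove the namespace, then rmdir the port and the subsystem, and only then can modprobe -r unload nvmet_tcp/nvmet. Condensed below with the NQNs taken from the trace; the namespace enable path is an assumption about where the bare 'echo 0' lands, since the trace does not show its redirection target.

# Leaf-first teardown of the kernel nvmet target, as performed above.
CFS=/sys/kernel/config/nvmet
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0

rm    "$CFS/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN"   # unlink host from subsystem
rmdir "$CFS/hosts/$HOSTNQN"
echo 0 > "$CFS/subsystems/$SUBNQN/namespaces/1/enable"   # assumed target of the bare 'echo 0'
rm -f "$CFS/ports/1/subsystems/$SUBNQN"                  # unlink subsystem from port
rmdir "$CFS/subsystems/$SUBNQN/namespaces/1"
rmdir "$CFS/ports/1"
rmdir "$CFS/subsystems/$SUBNQN"
modprobe -r nvmet_tcp nvmet    # modules unload only once configfs is empty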
00:23:46.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.522 04:15:00 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.522 04:15:00 -- nvmf/common.sh@7 -- # uname -s 00:23:46.522 04:15:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.522 04:15:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.522 04:15:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.522 04:15:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.522 04:15:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.522 04:15:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.522 04:15:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.522 04:15:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.522 04:15:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.522 04:15:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.522 04:15:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:46.522 04:15:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:46.522 04:15:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.522 04:15:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.522 04:15:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.522 04:15:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.522 04:15:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.522 04:15:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.522 04:15:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.522 04:15:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.522 04:15:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.522 04:15:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.522 04:15:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.522 04:15:00 -- paths/export.sh@5 -- # export PATH 00:23:46.522 04:15:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.522 04:15:00 -- nvmf/common.sh@47 -- # : 0 00:23:46.522 04:15:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.522 04:15:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.522 04:15:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.522 04:15:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.522 04:15:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.522 04:15:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.522 04:15:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.522 04:15:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.522 04:15:00 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:46.522 04:15:00 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:46.522 04:15:00 -- host/digest.sh@16 -- # runtime=2 00:23:46.522 04:15:00 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:46.522 04:15:00 -- host/digest.sh@138 -- # nvmftestinit 00:23:46.522 04:15:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:46.522 04:15:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.522 04:15:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:46.522 04:15:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:46.522 04:15:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:46.522 04:15:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.522 04:15:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.522 04:15:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.522 04:15:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:46.522 04:15:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:46.522 04:15:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.522 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:23:53.082 04:15:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:53.082 04:15:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.082 04:15:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.082 04:15:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.082 04:15:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.082 04:15:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.082 04:15:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.082 04:15:06 -- 
nvmf/common.sh@295 -- # net_devs=() 00:23:53.082 04:15:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.082 04:15:06 -- nvmf/common.sh@296 -- # e810=() 00:23:53.082 04:15:06 -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.082 04:15:06 -- nvmf/common.sh@297 -- # x722=() 00:23:53.082 04:15:06 -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.082 04:15:06 -- nvmf/common.sh@298 -- # mlx=() 00:23:53.082 04:15:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.082 04:15:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.082 04:15:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.082 04:15:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.082 04:15:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.082 04:15:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.082 04:15:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:53.082 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:53.082 04:15:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.082 04:15:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:53.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:53.082 04:15:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.082 04:15:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.082 04:15:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.082 04:15:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.082 04:15:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:53.082 04:15:06 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.082 04:15:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:53.082 Found net devices under 0000:af:00.0: cvl_0_0 00:23:53.082 04:15:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.082 04:15:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.082 04:15:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.082 04:15:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:53.082 04:15:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.082 04:15:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:53.082 Found net devices under 0000:af:00.1: cvl_0_1 00:23:53.082 04:15:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.082 04:15:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:53.083 04:15:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:53.083 04:15:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:53.083 04:15:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:53.083 04:15:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:53.083 04:15:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.083 04:15:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.083 04:15:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.083 04:15:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.083 04:15:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.083 04:15:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.083 04:15:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.083 04:15:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.083 04:15:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.083 04:15:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.083 04:15:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.083 04:15:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.083 04:15:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.083 04:15:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.083 04:15:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.083 04:15:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.083 04:15:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.083 04:15:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.083 04:15:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.083 04:15:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:23:53.083 00:23:53.083 --- 10.0.0.2 ping statistics --- 00:23:53.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.083 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:53.083 04:15:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:23:53.083 00:23:53.083 --- 10.0.0.1 ping statistics --- 00:23:53.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.083 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:53.083 04:15:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.083 04:15:06 -- nvmf/common.sh@411 -- # return 0 00:23:53.083 04:15:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:53.083 04:15:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.083 04:15:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:53.083 04:15:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:53.083 04:15:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.083 04:15:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:53.083 04:15:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:53.083 04:15:06 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:53.083 04:15:06 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:53.083 04:15:06 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:53.083 04:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:53.083 04:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.083 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:23:53.083 ************************************ 00:23:53.083 START TEST nvmf_digest_clean 00:23:53.083 ************************************ 00:23:53.083 04:15:06 -- common/autotest_common.sh@1111 -- # run_digest 00:23:53.083 04:15:06 -- host/digest.sh@120 -- # local dsa_initiator 00:23:53.083 04:15:06 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:53.083 04:15:06 -- host/digest.sh@121 -- # dsa_initiator=false 00:23:53.083 04:15:06 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:53.083 04:15:06 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:53.083 04:15:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:53.083 04:15:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:53.083 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:23:53.083 04:15:06 -- nvmf/common.sh@470 -- # nvmfpid=3949650 00:23:53.083 04:15:06 -- nvmf/common.sh@471 -- # waitforlisten 3949650 00:23:53.083 04:15:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:53.083 04:15:06 -- common/autotest_common.sh@817 -- # '[' -z 3949650 ']' 00:23:53.083 04:15:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.083 04:15:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:53.083 04:15:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.083 04:15:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:53.083 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:23:53.083 [2024-04-19 04:15:06.802654] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:23:53.083 [2024-04-19 04:15:06.802705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.083 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.083 [2024-04-19 04:15:06.887942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.083 [2024-04-19 04:15:06.980038] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.083 [2024-04-19 04:15:06.980080] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.083 [2024-04-19 04:15:06.980090] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.083 [2024-04-19 04:15:06.980099] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.083 [2024-04-19 04:15:06.980106] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.083 [2024-04-19 04:15:06.980132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.342 04:15:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:53.342 04:15:07 -- common/autotest_common.sh@850 -- # return 0 00:23:53.342 04:15:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:53.342 04:15:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:53.342 04:15:07 -- common/autotest_common.sh@10 -- # set +x 00:23:53.342 04:15:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.342 04:15:07 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:53.342 04:15:07 -- host/digest.sh@126 -- # common_target_config 00:23:53.342 04:15:07 -- host/digest.sh@43 -- # rpc_cmd 00:23:53.342 04:15:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.342 04:15:07 -- common/autotest_common.sh@10 -- # set +x 00:23:53.342 null0 00:23:53.342 [2024-04-19 04:15:07.861997] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.600 [2024-04-19 04:15:07.886184] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.600 04:15:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.600 04:15:07 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:53.600 04:15:07 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:53.600 04:15:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:53.601 04:15:07 -- host/digest.sh@80 -- # rw=randread 00:23:53.601 04:15:07 -- host/digest.sh@80 -- # bs=4096 00:23:53.601 04:15:07 -- host/digest.sh@80 -- # qd=128 00:23:53.601 04:15:07 -- host/digest.sh@80 -- # scan_dsa=false 00:23:53.601 04:15:07 -- host/digest.sh@83 -- # bperfpid=3949958 00:23:53.601 04:15:07 -- host/digest.sh@84 -- # waitforlisten 3949958 /var/tmp/bperf.sock 00:23:53.601 04:15:07 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:53.601 04:15:07 -- common/autotest_common.sh@817 -- # '[' -z 3949958 ']' 00:23:53.601 04:15:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:53.601 04:15:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:53.601 04:15:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:53.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:53.601 04:15:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:53.601 04:15:07 -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 [2024-04-19 04:15:07.939483] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:23:53.601 [2024-04-19 04:15:07.939538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949958 ] 00:23:53.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.601 [2024-04-19 04:15:08.012500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.601 [2024-04-19 04:15:08.098793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.859 04:15:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:53.859 04:15:08 -- common/autotest_common.sh@850 -- # return 0 00:23:53.859 04:15:08 -- host/digest.sh@86 -- # false 00:23:53.859 04:15:08 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:53.859 04:15:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:54.123 04:15:08 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.124 04:15:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.388 nvme0n1 00:23:54.388 04:15:08 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:54.388 04:15:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.646 Running I/O for 2 seconds... 
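Every bperf run in this test attaches to 10.0.0.2 port 4420, the address plumbed inside the cvl_0_0_ns_spdk namespace by the nvmf_tcp_init block near the top of this section. A minimal standalone sketch of that plumbing, assuming the same cvl_0_0/cvl_0_1 netdev names this host reports (discoverable under /sys/bus/pci/devices/<pci>/net/), is:

  #!/usr/bin/env bash
  # Target-side port moves into a private namespace with 10.0.0.2; the
  # initiator-side port stays in the root namespace with 10.0.0.1.
  NS=cvl_0_0_ns_spdk
  ls /sys/bus/pci/devices/0000:af:00.0/net/    # netdev name(s) for this function, e.g. cvl_0_0
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP traffic (port 4420) on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns
  modprobe nvme-tcp

All of the commands mirror the nvmf/common.sh trace above; only the grouping into one script is editorial.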
00:23:56.549
00:23:56.549 Latency(us)
00:23:56.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:56.549 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:56.549 nvme0n1 : 2.01 16423.70 64.16 0.00 0.00 7784.62 3932.16 19660.80
00:23:56.549 ===================================================================================================================
00:23:56.549 Total : 16423.70 64.16 0.00 0.00 7784.62 3932.16 19660.80
00:23:56.549 0
00:23:56.550 04:15:10 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:56.550 04:15:10 -- host/digest.sh@93 -- # get_accel_stats
00:23:56.550 04:15:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:56.550 04:15:10 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:56.550 | select(.opcode=="crc32c")
00:23:56.550 | "\(.module_name) \(.executed)"'
00:23:56.550 04:15:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:56.808 04:15:11 -- host/digest.sh@94 -- # false
00:23:56.808 04:15:11 -- host/digest.sh@94 -- # exp_module=software
00:23:56.808 04:15:11 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:56.808 04:15:11 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:56.808 04:15:11 -- host/digest.sh@98 -- # killprocess 3949958
00:23:56.808 04:15:11 -- common/autotest_common.sh@936 -- # '[' -z 3949958 ']'
00:23:56.808 04:15:11 -- common/autotest_common.sh@940 -- # kill -0 3949958
00:23:56.808 04:15:11 -- common/autotest_common.sh@941 -- # uname
00:23:56.808 04:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:56.808 04:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3949958
00:23:56.808 04:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:56.808 04:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:56.808 04:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3949958'
killing process with pid 3949958
04:15:11 -- common/autotest_common.sh@955 -- # kill 3949958
Received shutdown signal, test time was about 2.000000 seconds
00:23:56.808
00:23:56.808 Latency(us)
00:23:56.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:56.808 ===================================================================================================================
00:23:56.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:56.808 04:15:11 -- common/autotest_common.sh@960 -- # wait 3949958
00:23:57.066 04:15:11 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:23:57.066 04:15:11 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:23:57.066 04:15:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:23:57.066 04:15:11 -- host/digest.sh@80 -- # rw=randread
00:23:57.066 04:15:11 -- host/digest.sh@80 -- # bs=131072
00:23:57.066 04:15:11 -- host/digest.sh@80 -- # qd=16
00:23:57.066 04:15:11 -- host/digest.sh@80 -- # scan_dsa=false
00:23:57.066 04:15:11 -- host/digest.sh@83 -- # bperfpid=3950632
00:23:57.066 04:15:11 -- host/digest.sh@84 -- # waitforlisten 3950632 /var/tmp/bperf.sock
00:23:57.066 04:15:11 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:23:57.066 04:15:11 -- common/autotest_common.sh@817 -- # '[' -z 3950632 ']'
00:23:57.066 04:15:11 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.066 04:15:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:57.066 04:15:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.066 04:15:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:57.066 04:15:11 -- common/autotest_common.sh@10 -- # set +x 00:23:57.066 [2024-04-19 04:15:11.528600] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:23:57.066 [2024-04-19 04:15:11.528664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950632 ] 00:23:57.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.066 Zero copy mechanism will not be used. 00:23:57.066 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.323 [2024-04-19 04:15:11.601293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.323 [2024-04-19 04:15:11.691540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.323 04:15:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:57.323 04:15:11 -- common/autotest_common.sh@850 -- # return 0 00:23:57.323 04:15:11 -- host/digest.sh@86 -- # false 00:23:57.323 04:15:11 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:57.323 04:15:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:57.581 04:15:12 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.581 04:15:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.147 nvme0n1 00:23:58.147 04:15:12 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:58.147 04:15:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.147 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:58.147 Zero copy mechanism will not be used. 00:23:58.147 Running I/O for 2 seconds... 
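Each run_bperf iteration traced here repeats the same pattern against the bdevperf RPC socket: start bdevperf idle, finish framework init, attach the remote namespace with data digest enabled, then kick the workload. A condensed sketch using the same paths and arguments as the 131072/16 run above (SPDK_DIR stands in for the workspace path used by this job):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  # 1) start bdevperf idle; -z defers the workload to an RPC, --wait-for-rpc
  #    defers framework init (the harness then polls via waitforlisten)
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # 2) finish framework initialization
  $RPC framework_start_init
  # 3) attach the remote namespace with data digest enabled; --ddgst is what
  #    generates the crc32c traffic these tests measure
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4) run the workload defined on the command line
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests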
00:24:00.679
00:24:00.679 Latency(us)
00:24:00.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:00.679 nvme0n1 : 2.01 3742.89 467.86 0.00 0.00 4270.81 1340.51 11856.06
00:24:00.679 ===================================================================================================================
00:24:00.679 Total : 3742.89 467.86 0.00 0.00 4270.81 1340.51 11856.06
00:24:00.679 0
00:24:00.679 04:15:14 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:00.679 04:15:14 -- host/digest.sh@93 -- # get_accel_stats
00:24:00.679 04:15:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:00.679 04:15:14 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:00.679 | select(.opcode=="crc32c")
00:24:00.679 | "\(.module_name) \(.executed)"'
00:24:00.679 04:15:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:00.679 04:15:14 -- host/digest.sh@94 -- # false
00:24:00.679 04:15:14 -- host/digest.sh@94 -- # exp_module=software
00:24:00.679 04:15:14 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:00.679 04:15:14 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:00.679 04:15:14 -- host/digest.sh@98 -- # killprocess 3950632
00:24:00.679 04:15:14 -- common/autotest_common.sh@936 -- # '[' -z 3950632 ']'
00:24:00.679 04:15:14 -- common/autotest_common.sh@940 -- # kill -0 3950632
00:24:00.679 04:15:14 -- common/autotest_common.sh@941 -- # uname
00:24:00.679 04:15:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:00.679 04:15:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3950632
00:24:00.679 04:15:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:00.679 04:15:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:00.679 04:15:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3950632'
killing process with pid 3950632
04:15:14 -- common/autotest_common.sh@955 -- # kill 3950632
Received shutdown signal, test time was about 2.000000 seconds
00:24:00.680
00:24:00.680 Latency(us)
00:24:00.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.680 ===================================================================================================================
00:24:00.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:00.680 04:15:14 -- common/autotest_common.sh@960 -- # wait 3950632
00:24:00.938 04:15:15 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:24:00.938 04:15:15 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:24:00.938 04:15:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:24:00.938 04:15:15 -- host/digest.sh@80 -- # rw=randwrite
00:24:00.938 04:15:15 -- host/digest.sh@80 -- # bs=4096
00:24:00.938 04:15:15 -- host/digest.sh@80 -- # qd=128
00:24:00.938 04:15:15 -- host/digest.sh@80 -- # scan_dsa=false
00:24:00.938 04:15:15 -- host/digest.sh@83 -- # bperfpid=3951290
00:24:00.938 04:15:15 -- host/digest.sh@84 -- # waitforlisten 3951290 /var/tmp/bperf.sock
00:24:00.938 04:15:15 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:24:00.938 04:15:15 -- common/autotest_common.sh@817 -- # '[' -z 3951290 ']'
00:24:00.938 04:15:15
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.938 04:15:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.938 04:15:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.938 04:15:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.938 04:15:15 -- common/autotest_common.sh@10 -- # set +x 00:24:00.938 [2024-04-19 04:15:15.269669] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:00.938 [2024-04-19 04:15:15.269728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951290 ] 00:24:00.938 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.938 [2024-04-19 04:15:15.342711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.938 [2024-04-19 04:15:15.426989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.196 04:15:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.196 04:15:15 -- common/autotest_common.sh@850 -- # return 0 00:24:01.196 04:15:15 -- host/digest.sh@86 -- # false 00:24:01.196 04:15:15 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.196 04:15:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.454 04:15:15 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.454 04:15:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.048 nvme0n1 00:24:02.048 04:15:16 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.048 04:15:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.048 Running I/O for 2 seconds... 
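The get_accel_stats check that closes each of these runs is what actually validates the digest path: it asks the bdevperf app which accel module executed the crc32c operations and how many times. Extracted from the trace into a standalone one-liner (socket path as used by this job):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

For these runs the expected output is "software <count>" with a non-zero count, since scan_dsa=false leaves no hardware offload configured; the harness asserts exactly that with (( acc_executed > 0 )) and the software/exp_module comparison visible above.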
00:24:03.953
00:24:03.953 Latency(us)
00:24:03.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.953 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:03.953 nvme0n1 : 2.01 17706.77 69.17 0.00 0.00 7216.72 2844.86 11200.70
00:24:03.953 ===================================================================================================================
00:24:03.953 Total : 17706.77 69.17 0.00 0.00 7216.72 2844.86 11200.70
00:24:03.953 0
00:24:03.953 04:15:18 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:03.953 04:15:18 -- host/digest.sh@93 -- # get_accel_stats
00:24:03.953 04:15:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:03.953 04:15:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:03.953 04:15:18 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:03.953 | select(.opcode=="crc32c")
00:24:03.953 | "\(.module_name) \(.executed)"'
00:24:04.210 04:15:18 -- host/digest.sh@94 -- # false
00:24:04.210 04:15:18 -- host/digest.sh@94 -- # exp_module=software
00:24:04.210 04:15:18 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:04.210 04:15:18 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:04.210 04:15:18 -- host/digest.sh@98 -- # killprocess 3951290
00:24:04.210 04:15:18 -- common/autotest_common.sh@936 -- # '[' -z 3951290 ']'
00:24:04.210 04:15:18 -- common/autotest_common.sh@940 -- # kill -0 3951290
00:24:04.210 04:15:18 -- common/autotest_common.sh@941 -- # uname
00:24:04.210 04:15:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:04.210 04:15:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3951290
00:24:04.210 04:15:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:04.210 04:15:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:04.210 04:15:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3951290'
killing process with pid 3951290
04:15:18 -- common/autotest_common.sh@955 -- # kill 3951290
Received shutdown signal, test time was about 2.000000 seconds
00:24:04.210
00:24:04.210 Latency(us)
00:24:04.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.210 ===================================================================================================================
00:24:04.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:04.210 04:15:18 -- common/autotest_common.sh@960 -- # wait 3951290
00:24:04.468 04:15:18 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:24:04.468 04:15:18 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:24:04.468 04:15:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:24:04.468 04:15:18 -- host/digest.sh@80 -- # rw=randwrite
00:24:04.468 04:15:18 -- host/digest.sh@80 -- # bs=131072
00:24:04.468 04:15:18 -- host/digest.sh@80 -- # qd=16
00:24:04.468 04:15:18 -- host/digest.sh@80 -- # scan_dsa=false
00:24:04.468 04:15:18 -- host/digest.sh@83 -- # bperfpid=3951955
00:24:04.468 04:15:18 -- host/digest.sh@84 -- # waitforlisten 3951955 /var/tmp/bperf.sock
00:24:04.468 04:15:18 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:24:04.468 04:15:18 -- common/autotest_common.sh@817 -- # '[' -z 3951955 ']'
00:24:04.468
04:15:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.468 04:15:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:04.468 04:15:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.468 04:15:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:04.468 04:15:18 -- common/autotest_common.sh@10 -- # set +x 00:24:04.468 [2024-04-19 04:15:18.979170] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:04.468 [2024-04-19 04:15:18.979231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951955 ] 00:24:04.468 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.468 Zero copy mechanism will not be used. 00:24:04.726 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.726 [2024-04-19 04:15:19.051830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.726 [2024-04-19 04:15:19.142766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.726 04:15:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:04.726 04:15:19 -- common/autotest_common.sh@850 -- # return 0 00:24:04.726 04:15:19 -- host/digest.sh@86 -- # false 00:24:04.726 04:15:19 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:04.726 04:15:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:04.983 04:15:19 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:04.983 04:15:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.548 nvme0n1 00:24:05.548 04:15:19 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:05.548 04:15:19 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:05.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.548 Zero copy mechanism will not be used. 00:24:05.548 Running I/O for 2 seconds... 
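The "I/O size of 131072 is greater than zero copy threshold (65536)" notices around both 128 KiB runs indicate that these transfers fall back to a copying send path instead of the zero-copy one, since they exceed a 64 KiB cutoff. Nothing in this job changes that cutoff; on SPDK builds of this vintage it should be tunable per sock implementation before framework init, roughly as below (the flag name is an assumption here, not taken from this log):

  # Hypothetical tuning, not performed by this test:
  rpc.py -s /var/tmp/bperf.sock sock_impl_set_options -i posix --zerocopy-threshold 131072
  rpc.py -s /var/tmp/bperf.sock framework_start_init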
00:24:07.448
00:24:07.448 Latency(us)
00:24:07.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.448 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:07.448 nvme0n1 : 2.00 4149.24 518.65 0.00 0.00 3848.66 2681.02 10783.65
00:24:07.448 ===================================================================================================================
00:24:07.448 Total : 4149.24 518.65 0.00 0.00 3848.66 2681.02 10783.65
00:24:07.448 0
00:24:07.448 04:15:21 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:07.448 04:15:21 -- host/digest.sh@93 -- # get_accel_stats
00:24:07.448 04:15:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:07.448 04:15:21 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:07.448 | select(.opcode=="crc32c")
00:24:07.448 | "\(.module_name) \(.executed)"'
00:24:07.448 04:15:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:07.706 04:15:22 -- host/digest.sh@94 -- # false
00:24:07.706 04:15:22 -- host/digest.sh@94 -- # exp_module=software
00:24:07.706 04:15:22 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:07.706 04:15:22 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:07.706 04:15:22 -- host/digest.sh@98 -- # killprocess 3951955
00:24:07.706 04:15:22 -- common/autotest_common.sh@936 -- # '[' -z 3951955 ']'
00:24:07.706 04:15:22 -- common/autotest_common.sh@940 -- # kill -0 3951955
00:24:07.706 04:15:22 -- common/autotest_common.sh@941 -- # uname
00:24:07.706 04:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:07.706 04:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3951955
00:24:07.965 04:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:07.965 04:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:07.965 04:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3951955'
killing process with pid 3951955
04:15:22 -- common/autotest_common.sh@955 -- # kill 3951955
Received shutdown signal, test time was about 2.000000 seconds
00:24:07.965
00:24:07.965 Latency(us)
00:24:07.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.965 ===================================================================================================================
00:24:07.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:07.965 04:15:22 -- common/autotest_common.sh@960 -- # wait 3951955
00:24:07.965 04:15:22 -- host/digest.sh@132 -- # killprocess 3949650
00:24:07.965 04:15:22 -- common/autotest_common.sh@936 -- # '[' -z 3949650 ']'
00:24:07.965 04:15:22 -- common/autotest_common.sh@940 -- # kill -0 3949650
00:24:07.965 04:15:22 -- common/autotest_common.sh@941 -- # uname
00:24:07.965 04:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:07.965 04:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3949650
00:24:08.223 04:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:08.223 04:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:08.223 04:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3949650'
killing process with pid 3949650
04:15:22 -- common/autotest_common.sh@955 -- # kill 3949650
04:15:22 -- common/autotest_common.sh@960 -- # wait 3949650
00:24:08.481
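killprocess 3949650 above finally tears down the nvmf_tgt that served all four clean-digest runs. Its configuration is only visible in this log as side effects (the TCP transport init notice, the null0 bdev, the listener on 10.0.0.2:4420, and the cnode1 subsystem the initiators attach to), so the following reconstruction of the target-side RPC sequence is a sketch; the null bdev geometry and the allow-any-host flag are assumptions:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock
  $RPC framework_start_init                    # the target was started with --wait-for-rpc
  $RPC nvmf_create_transport -t tcp -o         # matches NVMF_TRANSPORT_OPTS='-t tcp -o' above
  $RPC bdev_null_create null0 1000 512         # bdev name from the log; size/block size assumed
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # -a (allow any host) assumed
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4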
00:24:08.481 real 0m16.020s 00:24:08.481 user 0m31.484s 00:24:08.481 sys 0m4.117s 00:24:08.481 04:15:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:08.481 04:15:22 -- common/autotest_common.sh@10 -- # set +x 00:24:08.481 ************************************ 00:24:08.481 END TEST nvmf_digest_clean 00:24:08.481 ************************************ 00:24:08.481 04:15:22 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:08.481 04:15:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:08.481 04:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.481 04:15:22 -- common/autotest_common.sh@10 -- # set +x 00:24:08.481 ************************************ 00:24:08.481 START TEST nvmf_digest_error 00:24:08.481 ************************************ 00:24:08.481 04:15:22 -- common/autotest_common.sh@1111 -- # run_digest_error 00:24:08.481 04:15:22 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:08.481 04:15:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:08.481 04:15:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:08.481 04:15:22 -- common/autotest_common.sh@10 -- # set +x 00:24:08.481 04:15:22 -- nvmf/common.sh@470 -- # nvmfpid=3952659 00:24:08.481 04:15:22 -- nvmf/common.sh@471 -- # waitforlisten 3952659 00:24:08.481 04:15:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:08.481 04:15:22 -- common/autotest_common.sh@817 -- # '[' -z 3952659 ']' 00:24:08.481 04:15:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.481 04:15:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.481 04:15:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.481 04:15:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.481 04:15:22 -- common/autotest_common.sh@10 -- # set +x 00:24:08.481 [2024-04-19 04:15:22.987312] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:08.481 [2024-04-19 04:15:22.987370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.743 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.743 [2024-04-19 04:15:23.072688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.743 [2024-04-19 04:15:23.160994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.743 [2024-04-19 04:15:23.161039] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.743 [2024-04-19 04:15:23.161049] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.743 [2024-04-19 04:15:23.161057] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.743 [2024-04-19 04:15:23.161065] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.743 [2024-04-19 04:15:23.161090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.743 04:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.743 04:15:23 -- common/autotest_common.sh@850 -- # return 0 00:24:08.743 04:15:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:08.743 04:15:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:08.743 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:24:08.743 04:15:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.743 04:15:23 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:08.743 04:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.743 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:24:08.743 [2024-04-19 04:15:23.225582] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:08.743 04:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.743 04:15:23 -- host/digest.sh@105 -- # common_target_config 00:24:08.743 04:15:23 -- host/digest.sh@43 -- # rpc_cmd 00:24:08.743 04:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.743 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 null0 00:24:09.003 [2024-04-19 04:15:23.321258] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.003 [2024-04-19 04:15:23.345460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.003 04:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.003 04:15:23 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:09.003 04:15:23 -- host/digest.sh@54 -- # local rw bs qd 00:24:09.003 04:15:23 -- host/digest.sh@56 -- # rw=randread 00:24:09.003 04:15:23 -- host/digest.sh@56 -- # bs=4096 00:24:09.003 04:15:23 -- host/digest.sh@56 -- # qd=128 00:24:09.003 04:15:23 -- host/digest.sh@58 -- # bperfpid=3952754 00:24:09.003 04:15:23 -- host/digest.sh@60 -- # waitforlisten 3952754 /var/tmp/bperf.sock 00:24:09.003 04:15:23 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:09.003 04:15:23 -- common/autotest_common.sh@817 -- # '[' -z 3952754 ']' 00:24:09.003 04:15:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.003 04:15:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.003 04:15:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.003 04:15:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.003 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 [2024-04-19 04:15:23.398296] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:24:09.003 [2024-04-19 04:15:23.398355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952754 ] 00:24:09.003 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.003 [2024-04-19 04:15:23.471929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.261 [2024-04-19 04:15:23.562322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.261 04:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:09.261 04:15:23 -- common/autotest_common.sh@850 -- # return 0 00:24:09.261 04:15:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.261 04:15:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.519 04:15:23 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.519 04:15:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.519 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.519 04:15:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.519 04:15:23 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.519 04:15:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.083 nvme0n1 00:24:10.083 04:15:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:10.083 04:15:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.083 04:15:24 -- common/autotest_common.sh@10 -- # set +x 00:24:10.083 04:15:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.083 04:15:24 -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:10.083 04:15:24 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.083 Running I/O for 2 seconds... 
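What distinguishes run_digest_error from the clean variant is visible in the trace above: the target assigns the crc32c opcode to the accel "error" module at startup (accel_assign_opc -o crc32c -m error), and the initiator attaches with --nvme-error-stat and an unlimited --bdev-retry-count so that every corrupted digest is retried instead of failed. Collected into one hedged sequence (rpc_cmd in the harness talks to the target's default /var/tmp/spdk.sock, bperf_rpc to /var/tmp/bperf.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Initiator (bdevperf) side: count NVMe errors per opcode, retry forever.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Target side: crc32c is already routed to the error module; arm it.
  $RPC accel_error_inject_error -o crc32c -t disable          # start from a clean state
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 digest computations

Each corrupted digest then surfaces on the initiator as one of the "data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs that fill the rest of this run, and the affected READ is retried rather than completed with an error.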
00:24:10.083 [2024-04-19 04:15:24.535095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.083 [2024-04-19 04:15:24.535136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.083 [2024-04-19 04:15:24.535151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.083 [2024-04-19 04:15:24.552704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.083 [2024-04-19 04:15:24.552735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.083 [2024-04-19 04:15:24.552749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.083 [2024-04-19 04:15:24.572247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.083 [2024-04-19 04:15:24.572277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.083 [2024-04-19 04:15:24.572289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.083 [2024-04-19 04:15:24.591372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.083 [2024-04-19 04:15:24.591400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.083 [2024-04-19 04:15:24.591412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.083 [2024-04-19 04:15:24.608359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.083 [2024-04-19 04:15:24.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.083 [2024-04-19 04:15:24.608400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.621235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.621262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.621274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.636889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.636916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.636928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.649403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.649429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.649441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.665365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.665397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.665410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.682843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.682870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.682882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.695818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.695846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.695858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.712927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.712956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.712970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.725470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.725497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.725510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.743485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.743513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.743526] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.759619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.759646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.759658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.772708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.772734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.772747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.787042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.787070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.787082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.801856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.801883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.801895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.816743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.816771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.816783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.831265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.831292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.831305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.340 [2024-04-19 04:15:24.845209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.340 [2024-04-19 04:15:24.845236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.340 [2024-04-19 04:15:24.845248] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.341 [2024-04-19 04:15:24.860722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.341 [2024-04-19 04:15:24.860749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.341 [2024-04-19 04:15:24.860761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.876117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.876144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-04-19 04:15:24.876156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.889709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-04-19 04:15:24.889747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.905628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.905655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-04-19 04:15:24.905666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.921024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.921050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-04-19 04:15:24.921067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.937700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.937726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.599 [2024-04-19 04:15:24.937738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.599 [2024-04-19 04:15:24.950154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0) 00:24:10.599 [2024-04-19 04:15:24.950181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:10.599 [2024-04-19 04:15:24.950193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:10.599 [2024-04-19 04:15:24.965942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0)
00:24:10.599 [2024-04-19 04:15:24.965970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:10.599 [2024-04-19 04:15:24.965982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:10.599 [2024-04-19 04:15:24.981615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd44ff0)
00:24:10.599 [2024-04-19 04:15:24.981642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:10.599 [2024-04-19 04:15:24.981654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 100 further records of the same three-line shape (data digest error on tqpair=(0xd44ff0), the failing READ with len:1, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) elided; they run from 04:15:24.995 through 04:15:26.522 and differ only in timestamp, cid and lba ...]
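Each of the elided failures follows the same three-record pattern: nvme_tcp.c reports a CRC-32C data digest mismatch on the queue pair, nvme_qpair.c prints the READ that carried the bad digest, and the completion is printed with status (00/22), i.e. status code type 0x0 (generic) / status code 0x22, the NVMe "Command Transient Transport Error". In the completion, qid/cid identify the queue pair and command, sqhd is the reported submission queue head, and p, m, dnr are the phase, more, and do-not-retry bits (dnr:0, so the host may retry). A quick way to tally such records from a captured copy of this console output; digest.log here is a hypothetical capture file, not something the job itself produces:

  # Count completions that carried the transient transport error status
  # (parentheses are literal in grep's default BRE syntax).
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' digest.log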
00:24:12.157
00:24:12.157 Latency(us)
00:24:12.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.157 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:12.157 nvme0n1 : 2.05 15985.21 62.44 0.00 0.00 7836.90 3932.16 50045.67
00:24:12.157 ===================================================================================================================
00:24:12.157 Total : 15985.21 62.44 0.00 0.00 7836.90 3932.16 50045.67
00:24:12.157 0
00:24:12.157 04:15:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:12.157 04:15:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:12.157 04:15:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:12.157 04:15:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:12.157 | .driver_specific
00:24:12.157 | .nvme_error
00:24:12.157 | .status_code
00:24:12.157 | .command_transient_transport_error'
00:24:12.415 04:15:26 -- host/digest.sh@71 -- # (( 128 > 0 ))
00:24:12.415 04:15:26 -- host/digest.sh@73 -- # killprocess 3952754
00:24:12.415 04:15:26 -- common/autotest_common.sh@936 -- # '[' -z 3952754 ']'
00:24:12.415 04:15:26 -- common/autotest_common.sh@940 -- # kill -0 3952754
00:24:12.415 04:15:26 -- common/autotest_common.sh@941 -- # uname
00:24:12.415 04:15:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:12.415 04:15:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3952754
00:24:12.415 04:15:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1
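The get_transient_errcount / jq sequence traced above boils down to a single RPC plus a jq projection over the bdev's NVMe error counters (which exist because bdev_nvme_set_options is called with --nvme-error-stat, as visible in the setup trace below). A minimal standalone sketch of the same query, reusing only the socket, script path, and filter seen in the trace:

  # Pull the per-bdev count of "command transient transport error"
  # completions out of bdevperf's iostat over its RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')
  # The test then simply asserts the counter is non-zero, as in (( 128 > 0 )) above.
  (( errcount > 0 )) && echo "counted $errcount transient transport errors"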
00:24:12.415 04:15:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:12.415 04:15:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3952754'
killing process with pid 3952754
00:24:12.415 04:15:26 -- common/autotest_common.sh@955 -- # kill 3952754
Received shutdown signal, test time was about 2.000000 seconds
00:24:12.415
00:24:12.415 Latency(us)
00:24:12.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.415 ===================================================================================================================
00:24:12.415 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:12.415 04:15:26 -- common/autotest_common.sh@960 -- # wait 3952754
00:24:12.673 04:15:27 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:12.673 04:15:27 -- host/digest.sh@54 -- # local rw bs qd
00:24:12.673 04:15:27 -- host/digest.sh@56 -- # rw=randread
00:24:12.673 04:15:27 -- host/digest.sh@56 -- # bs=131072
00:24:12.673 04:15:27 -- host/digest.sh@56 -- # qd=16
00:24:12.673 04:15:27 -- host/digest.sh@58 -- # bperfpid=3953472
00:24:12.673 04:15:27 -- host/digest.sh@60 -- # waitforlisten 3953472 /var/tmp/bperf.sock
00:24:12.673 04:15:27 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:12.673 04:15:27 -- common/autotest_common.sh@817 -- # '[' -z 3953472 ']'
00:24:12.673 04:15:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:12.673 04:15:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:12.673 04:15:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:12.673 04:15:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:12.673 04:15:27 -- common/autotest_common.sh@10 -- # set +x
00:24:12.673 [2024-04-19 04:15:27.154475] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:24:12.673 [2024-04-19 04:15:27.154535] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953472 ]
00:24:12.673 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:12.673 Zero copy mechanism will not be used.
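For context on the launch traced above: bdevperf is started with -z so it comes up idle, listening on the -r RPC socket and issuing no I/O until triggered, while -m 2 pins it to core 1 and -w randread -o 131072 -q 16 -t 2 pre-set the workload, I/O size, queue depth, and runtime. A hedged sketch of the equivalent manual launch; the polling line merely stands in for the harness's waitforlisten helper and assumes rpc.py's -t timeout option and the standard rpc_get_methods method:

  # Start bdevperf idle; it waits on /var/tmp/bperf.sock to be configured.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Block until the UNIX-domain RPC socket is accepting connections.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock -t 30 rpc_get_methods >/dev/null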
00:24:12.673 EAL: No free 2048 kB hugepages reported on node 1
00:24:12.931 [2024-04-19 04:15:27.228375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.931 [2024-04-19 04:15:27.318369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:12.931 04:15:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:12.931 04:15:27 -- common/autotest_common.sh@850 -- # return 0
00:24:12.931 04:15:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:12.931 04:15:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:13.190 04:15:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:13.190 04:15:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.190 04:15:27 -- common/autotest_common.sh@10 -- # set +x
00:24:13.190 04:15:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.190 04:15:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:13.190 04:15:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:13.756 nvme0n1
00:24:13.756 04:15:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:13.756 04:15:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:13.756 04:15:28 -- common/autotest_common.sh@10 -- # set +x
00:24:13.756 04:15:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:13.756 04:15:28 -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:13.756 04:15:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:13.756 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:13.756 Zero copy mechanism will not be used.
00:24:13.756 Running I/O for 2 seconds...
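The configuration sequence traced above is the core of the test and is short enough to replay by hand. The crc32c error injector is disabled while the controller attaches, so the data-digest-enabled connection (--ddgst) comes up clean, then flipped to corrupt so subsequent digest computations fail; the -i 32 argument is carried over verbatim from the trace. Since the earlier 4096-byte run logged len:1 READs, the blocks are 4096 bytes, so each failing 131072-byte READ here spans 131072 / 4096 = 32 LBAs, matching the len:32 in the records that follow. A condensed sketch using only RPCs that appear in the trace:

  # All configuration goes through bdevperf's RPC socket.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Keep per-status-code NVMe error counters; retry failed I/O indefinitely.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach with clean crc32c so the controller comes up without digest errors...
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then corrupt crc32c results so read data digests start failing.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Trigger the pre-configured 2-second randread run in the idle bdevperf.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests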
00:24:13.756 [2024-04-19 04:15:28.230572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:13.756 [2024-04-19 04:15:28.230613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.756 [2024-04-19 04:15:28.230629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:13.756 [2024-04-19 04:15:28.239271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:13.756 [2024-04-19 04:15:28.239303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.756 [2024-04-19 04:15:28.239315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... 18 further records of the same three-line shape for tqpair=(0x814ec0), now len:32 READs, elided; timestamps run 04:15:28.248 through 04:15:28.398 and only cid, lba and sqhd vary ...]
00:24:14.016 [2024-04-19 04:15:28.407557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:14.016 [2024-04-19 04:15:28.407585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.016 [2024-04-19 04:15:28.407596] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.415953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.415981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.415993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.424668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.424696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.424708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.434163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.434191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.434203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.445038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.445067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.445080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.456056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.456084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.456097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.467323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.467358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.467371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.477771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 
[2024-04-19 04:15:28.477811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.486730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.486758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.486770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.493214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.493243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.493255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.504990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.505019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.505032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.515548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.515594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.525860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.525889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.525901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.016 [2024-04-19 04:15:28.536291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.016 [2024-04-19 04:15:28.536319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.016 [2024-04-19 04:15:28.536332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.546150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.546179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.546191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.556466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.556496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.556508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.566703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.566731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.566743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.577659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.577687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.577700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.587223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.587251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.587263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.597390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.597416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.597429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.607274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.607307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.607319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.616994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.617022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.617035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.626258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.626286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.626299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.635516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.635544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.635555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.644633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.644660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.644673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.653277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.653304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.653316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.661363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.661389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.661401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.669137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.669164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.669176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.676815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.676842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.676854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.684458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.684484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.684496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.692194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.692220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.692231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.699772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.699799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.699811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.707301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.707328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.707340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.714849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.714875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.714886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.722571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.722598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.722610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.730124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.730150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.737650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.737676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.276 [2024-04-19 04:15:28.737688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.276 [2024-04-19 04:15:28.745248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.276 [2024-04-19 04:15:28.745275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.745291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.753027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.753053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.753065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.760558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.760585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.760597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.768107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.768133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.768145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.775749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.775775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.775787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.783474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 
00:24:14.277 [2024-04-19 04:15:28.783500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.783512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.790967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.790993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.791005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.277 [2024-04-19 04:15:28.798444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.277 [2024-04-19 04:15:28.798470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.277 [2024-04-19 04:15:28.798482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.805882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.805908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.805924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.813364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.813395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.813407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.821071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.821098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.821109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.828638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.828665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.828677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.836082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.836107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.836119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.843617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.843643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.843654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.851418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.851444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.851456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.858966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.858992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.859003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.866374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.866400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.866413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.873898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.873937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.881660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.881687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.881699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.889267] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.889294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.889305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.896769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.896796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.896807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.904305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.904331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.904349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.911900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.911939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.919702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.536 [2024-04-19 04:15:28.919727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.536 [2024-04-19 04:15:28.919739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.536 [2024-04-19 04:15:28.927220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.927247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.927259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.934751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.934780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.934791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:14.537 [2024-04-19 04:15:28.942436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.942463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.942480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.950077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.950104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.950115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.957633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.957660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.957671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.965357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.965383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.965395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.973404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.973431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.973442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.981094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.981121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.981132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.988700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.988727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.988738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:28.996451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:28.996477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:28.996489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.004163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.004188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.004200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.011781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.011812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.011824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.019395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.019421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.019432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.027064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.027090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.027102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.034579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.034606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.034618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.042099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.042127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.042138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.049713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.049740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.049752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.537 [2024-04-19 04:15:29.057527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.537 [2024-04-19 04:15:29.057555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.537 [2024-04-19 04:15:29.057567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.796 [2024-04-19 04:15:29.065042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.796 [2024-04-19 04:15:29.065068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.796 [2024-04-19 04:15:29.065080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.796 [2024-04-19 04:15:29.072559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.796 [2024-04-19 04:15:29.072586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.796 [2024-04-19 04:15:29.072598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.796 [2024-04-19 04:15:29.080117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.796 [2024-04-19 04:15:29.080144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.080156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.087766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.087793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.087804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.095222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.095249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.095260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.102638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.102665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.102677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.110064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.110092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.110104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.117613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.117639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.117652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.125406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.125444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.132887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.132914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.132926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.140128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.140155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 [2024-04-19 04:15:29.140172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:14.797 [2024-04-19 04:15:29.147670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0) 00:24:14.797 [2024-04-19 04:15:29.147697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.797 
[2024-04-19 04:15:29.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:14.797 [2024-04-19 04:15:29.155415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:14.797 [2024-04-19 04:15:29.155441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:14.797 [2024-04-19 04:15:29.155453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x814ec0), the failing READ command, then a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every queued READ with varying cid and lba, from 04:15:29.163113 through 04:15:30.204587; those near-duplicate entries are elided ...]
00:24:15.838 [2024-04-19 04:15:30.212715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:15.838 [2024-04-19 04:15:30.212743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.838 [2024-04-19 04:15:30.212755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:15.838 [2024-04-19 04:15:30.220722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x814ec0)
00:24:15.838 [2024-04-19 04:15:30.220750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.838 [2024-04-19 04:15:30.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:15.838
00:24:15.838 Latency(us)
00:24:15.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:15.838 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:15.838 nvme0n1 : 2.00 3570.51 446.31 0.00 0.00 4477.30 975.59 11856.06
00:24:15.838 ===================================================================================================================
00:24:15.838 Total : 3570.51 446.31 0.00 0.00 4477.30 975.59 11856.06
00:24:15.838 0
00:24:15.838 04:15:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:15.838 04:15:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:15.838 04:15:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:15.838 | .driver_specific
00:24:15.838 | .nvme_error
00:24:15.838 | .status_code
00:24:15.838 | .command_transient_transport_error'
00:24:15.838 04:15:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:16.097 04:15:30 -- host/digest.sh@71 -- # (( 230 > 0 ))
00:24:16.097 04:15:30 -- host/digest.sh@73 -- # killprocess 3953472
00:24:16.097 04:15:30 -- common/autotest_common.sh@936 -- # '[' -z 3953472 ']'
00:24:16.097 04:15:30 -- common/autotest_common.sh@940 -- # kill -0 3953472
00:24:16.097 04:15:30 -- common/autotest_common.sh@941 -- # uname
00:24:16.097 04:15:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:16.097 04:15:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3953472
00:24:16.097 04:15:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:16.097 04:15:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:16.097 04:15:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3953472'
00:24:16.097 killing process with pid 3953472
00:24:16.097 04:15:30 -- common/autotest_common.sh@955 -- # kill 3953472
00:24:16.097 Received shutdown signal, test time was about 2.000000 seconds
00:24:16.097
00:24:16.097 Latency(us)
00:24:16.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.097 ===================================================================================================================
00:24:16.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:16.097 04:15:30 -- common/autotest_common.sh@960 -- # wait 3953472
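
The (( 230 > 0 )) check above is the pass criterion for the randread digest case: get_transient_errcount reads back the per-bdev NVMe error counters that bdevperf keeps (the --nvme-error-stat option visible in the next run's setup is what enables them) and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion; here 230 READs failed that way. A minimal sketch of what the traced step boils down to, reconstructed from the xtrace (the actual function body in host/digest.sh may differ):

get_transient_errcount() {
    # Ask the running bdevperf instance for per-bdev I/O statistics over its
    # UNIX-domain RPC socket, then pick out the count of completions whose
    # status was COMMAND TRANSIENT TRANSPORT ERROR (the (00/22) lines above).
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}
# Pass criterion, as traced above:
(( $(get_transient_errcount nvme0n1) > 0 ))

The trace that follows repeats the same flow for a randwrite workload.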
00:24:16.357 04:15:30 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:16.357 04:15:30 -- host/digest.sh@54 -- # local rw bs qd
00:24:16.357 04:15:30 -- host/digest.sh@56 -- # rw=randwrite
00:24:16.357 04:15:30 -- host/digest.sh@56 -- # bs=4096
00:24:16.357 04:15:30 -- host/digest.sh@56 -- # qd=128
00:24:16.357 04:15:30 -- host/digest.sh@58 -- # bperfpid=3954042
00:24:16.357 04:15:30 -- host/digest.sh@60 -- # waitforlisten 3954042 /var/tmp/bperf.sock
00:24:16.357 04:15:30 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:16.357 04:15:30 -- common/autotest_common.sh@817 -- # '[' -z 3954042 ']'
00:24:16.357 04:15:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:16.357 04:15:30 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:16.357 04:15:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:16.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:16.357 04:15:30 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:16.357 04:15:30 -- common/autotest_common.sh@10 -- # set +x
00:24:16.357 [2024-04-19 04:15:30.811108] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:24:16.357 [2024-04-19 04:15:30.811166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954042 ]
00:24:16.357 EAL: No free 2048 kB hugepages reported on node 1
00:24:16.615 [2024-04-19 04:15:30.884095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:16.615 [2024-04-19 04:15:30.973810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:16.615 04:15:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:16.615 04:15:31 -- common/autotest_common.sh@850 -- # return 0
00:24:16.615 04:15:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:16.615 04:15:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:16.874 04:15:31 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:16.874 04:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:16.874 04:15:31 -- common/autotest_common.sh@10 -- # set +x
00:24:16.874 04:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:16.874 04:15:31 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:16.874 04:15:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:17.132 nvme0n1
00:24:17.132 04:15:31 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:17.132 04:15:31 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:17.132 04:15:31 -- common/autotest_common.sh@10 -- # set +x
00:24:17.132 04:15:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:17.132 04:15:31 -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:17.132 04:15:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
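
Condensed, the xtrace above amounts to the following setup for the randwrite digest-error case. This is a sketch reconstructed from the trace, with two hedges: the bperf_rpc and rpc_cmd wrappers are reconstructions (bperf_rpc clearly targets bdevperf's socket at /var/tmp/bperf.sock, while rpc_cmd presumably targets the nvmf target app's default RPC socket, which this excerpt does not show), and the comments describe the apparent intent rather than the exact host/digest.sh source.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf's socket, from the trace
rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }  # assumption: target app's default socket

# Keep NVMe error counters and retry failed I/O indefinitely, so injected
# digest failures show up as statistics instead of aborting the run.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Error injection stays disabled while the controller attaches...
rpc_cmd accel_error_inject_error -o crc32c -t disable
# ...then attach over NVMe-oF TCP with data digest enabled (--ddgst).
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the next 256 crc32c operations in the accel layer; every data
# digest computed for incoming WRITE data will now mismatch, producing the
# tcp.c data_crc32_calc_done errors and (00/22) completions that follow.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the workload bdevperf was started with (-w randwrite -o 4096 -q 128 -t 2).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The run output below then shows each WRITE failing its digest check.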
00:24:17.390 Running I/O for 2 seconds...
00:24:17.390 [2024-04-19 04:15:31.775779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.776021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.776056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.790506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.790736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.790763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.805272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.805510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.805536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.820078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.820309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.820336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.834881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.835102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.835128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.849661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.849886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.849915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:17.390 [2024-04-19 04:15:31.864429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:17.390 [2024-04-19 04:15:31.864657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:17.390 [2024-04-19 04:15:31.864681] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.390 [2024-04-19 04:15:31.879172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.390 [2024-04-19 04:15:31.879400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.390 [2024-04-19 04:15:31.879425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.390 [2024-04-19 04:15:31.893979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.390 [2024-04-19 04:15:31.894205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.390 [2024-04-19 04:15:31.894230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.390 [2024-04-19 04:15:31.908744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.390 [2024-04-19 04:15:31.908970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.390 [2024-04-19 04:15:31.908995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.923529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.923752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.923777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.938284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.938519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.938543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.953038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.953262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.953286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.967795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.968018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.968044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.982536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.982769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.982794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:31.997326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:31.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:31.997583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.012218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.012449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.012474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.026976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.027201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.027225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.041727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.041954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.041977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.056433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.056656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.056680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.071212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.071445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 
04:15:32.071469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.085951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.086173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.086196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.100701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.100928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.100951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.115661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.115914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.130405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.130630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.130654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.145124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.145351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.145375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.159857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.160084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.649 [2024-04-19 04:15:32.160107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.649 [2024-04-19 04:15:32.174587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.649 [2024-04-19 04:15:32.174813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.649 [2024-04-19 04:15:32.174837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.189358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.908 [2024-04-19 04:15:32.189579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.908 [2024-04-19 04:15:32.189602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.204134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.908 [2024-04-19 04:15:32.204361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.908 [2024-04-19 04:15:32.204386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.218903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.908 [2024-04-19 04:15:32.219129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.908 [2024-04-19 04:15:32.219154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.233704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.908 [2024-04-19 04:15:32.233928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.908 [2024-04-19 04:15:32.233956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.248455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.908 [2024-04-19 04:15:32.248681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.908 [2024-04-19 04:15:32.248706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.908 [2024-04-19 04:15:32.263189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.263423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.263447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.277993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.278216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2804 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:17.909 [2024-04-19 04:15:32.278245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.292771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.292996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.293021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.307568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.307795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.307818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.322336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.322568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.322592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.337147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.337370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.337395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.351887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.352110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.352134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.366707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.366935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.366958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.381494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.381713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20141 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.381737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.396279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.396514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.396538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.411049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.411272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.411296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.909 [2024-04-19 04:15:32.425837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:17.909 [2024-04-19 04:15:32.426061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.909 [2024-04-19 04:15:32.426085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.440645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.440869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.440892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.455399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.455625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.455648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.470168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.470389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.470413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.484946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.485171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.485195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.499750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.499975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.499999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.167 [2024-04-19 04:15:32.514512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.167 [2024-04-19 04:15:32.514736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.167 [2024-04-19 04:15:32.514760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.529299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.529532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.529557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.544063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.544287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.544310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.558839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.559061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.559084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.573608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.573830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.573853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.588362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:15338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.588613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.603425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.603653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.618190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.618423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.618452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.632977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.633199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.633223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.647746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.647970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.647994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.662529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.662752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.662775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.677292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.677521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.677545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.168 [2024-04-19 04:15:32.692059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.168 [2024-04-19 04:15:32.692284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.168 [2024-04-19 04:15:32.692306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.706839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.707064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.707086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.721595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.721820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.721844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.736390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.736612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.736635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.751125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.751362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.751386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.765907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.766129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.766152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.780641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.780863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.780887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.795427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.795651] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.795675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.810182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.810405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.810430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.824952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.825176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.825200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.839737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.839959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.854511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.854738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.854763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.869236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.869468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.869491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.884012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.884236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.884259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.898769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 
04:15:32.898994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.899017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.913547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.913770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.928276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.928505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.928529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.428 [2024-04-19 04:15:32.943087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.428 [2024-04-19 04:15:32.943310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.428 [2024-04-19 04:15:32.943334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:32.957830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:32.958052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:32.958076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:32.972623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:32.972845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:32.972868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:32.987369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:32.987592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:32.987615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.002170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 
00:24:18.708 [2024-04-19 04:15:33.002413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.002437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.017007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.017233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.017258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.031832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.032058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.032082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.046582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.046806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.046830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.061350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.061575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.061599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.076149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.076370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.076400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.090916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.091142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.091166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.105681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with 
pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.105905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.105928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.120451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.120677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.120701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.135236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.135465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.135493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.150025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.150250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.150274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.164799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.165022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.165045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.179543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.179767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.179790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.194310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.194539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.194563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.708 [2024-04-19 04:15:33.209052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.708 [2024-04-19 04:15:33.209275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.708 [2024-04-19 04:15:33.209298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.223825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.224049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.224073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.238613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.238838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.238861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.253374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.253599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.253623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.268136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.268363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.268386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.282907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.283131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.283154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.297682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.297905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.297928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.312423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.312649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.312673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.327160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.327383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.327407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.341905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.342128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.342151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.356705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.356930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.356954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.371435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.371660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.371683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.386180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.386402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.386425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.400932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.401156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.401181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.415671] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.415892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.415916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.430410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.430636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.430660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.445142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.445362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.445386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.459905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.460126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.460150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.474662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.474887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.474911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:18.975 [2024-04-19 04:15:33.489409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:18.975 [2024-04-19 04:15:33.489634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.975 [2024-04-19 04:15:33.489657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.504149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.504371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.504396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 
[2024-04-19 04:15:33.518895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.519119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.519146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.533664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.533890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.533913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.548401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.548623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.548647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.563146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.563370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.563394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.577907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.578131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.578154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.592621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.592844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.592868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.607655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.607881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.607904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:24:19.235 [2024-04-19 04:15:33.622375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.622602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.622626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.637170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.637394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.637417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.651915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.652142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.652165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.666682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.666907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.681449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.681674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.681697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.696234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.696465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.696490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:19.235 [2024-04-19 04:15:33.711024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90 00:24:19.235 [2024-04-19 04:15:33.711247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.235 [2024-04-19 04:15:33.711271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0
00:24:19.235 [2024-04-19 04:15:33.725811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:19.235 [2024-04-19 04:15:33.726034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.235 [2024-04-19 04:15:33.726059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:19.235 [2024-04-19 04:15:33.740613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:19.235 [2024-04-19 04:15:33.740834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.235 [2024-04-19 04:15:33.740859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:19.235 [2024-04-19 04:15:33.755377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23271f0) with pdu=0x2000190fef90
00:24:19.235 [2024-04-19 04:15:33.755604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:19.236 [2024-04-19 04:15:33.755627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:19.495
00:24:19.495 Latency(us)
00:24:19.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.495 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:19.495 nvme0n1 : 2.01 17276.94 67.49 0.00 0.00 7391.12 3768.32 15073.28
00:24:19.495 ===================================================================================================================
00:24:19.495 Total : 17276.94 67.49 0.00 0.00 7391.12 3768.32 15073.28
00:24:19.495 0
00:24:19.495 04:15:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:19.495 04:15:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:19.495 04:15:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:19.495 | .driver_specific
00:24:19.495 | .nvme_error
00:24:19.495 | .status_code
00:24:19.495 | .command_transient_transport_error'
00:24:19.495 04:15:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:19.754 04:15:34 -- host/digest.sh@71 -- # (( 135 > 0 ))
00:24:19.754 04:15:34 -- host/digest.sh@73 -- # killprocess 3954042
00:24:19.754 04:15:34 -- common/autotest_common.sh@936 -- # '[' -z 3954042 ']'
00:24:19.754 04:15:34 -- common/autotest_common.sh@940 -- # kill -0 3954042
00:24:19.754 04:15:34 -- common/autotest_common.sh@941 -- # uname
00:24:19.754 04:15:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:19.754 04:15:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3954042
00:24:19.754 04:15:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:19.754 04:15:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:19.754 04:15:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3954042'
00:24:19.754 killing process with pid 3954042
00:24:19.754 04:15:34 -- common/autotest_common.sh@955 -- # kill 3954042
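The xtrace above shows host/digest.sh deciding pass/fail for the subtest that just finished: it reads the initiator-side error counters bdevperf accumulated (enabled earlier via bdev_nvme_set_options --nvme-error-stat) and requires at least one transient transport error; here the trace shows 135 were counted. A minimal sketch of that check, reconstructed from the trace using the paths from this run (the real helper body lives in host/digest.sh and is not reproduced in this log):

#!/usr/bin/env bash
# Paths as used in this run; adjust rootdir for other checkouts.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat returns JSON; with --nvme-error-stat in effect the
    # driver_specific section carries per-NVMe-status-code error counters.
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The subtest passes only if the injected digest errors actually surfaced.
(( $(get_transient_errcount nvme0n1) > 0 ))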
00:24:19.754 Received shutdown signal, test time was about 2.000000 seconds
00:24:19.754
00:24:19.754 Latency(us)
00:24:19.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.754 ===================================================================================================================
00:24:19.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:19.754 04:15:34 -- common/autotest_common.sh@960 -- # wait 3954042
00:24:20.014 04:15:34 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:20.014 04:15:34 -- host/digest.sh@54 -- # local rw bs qd
00:24:20.014 04:15:34 -- host/digest.sh@56 -- # rw=randwrite
00:24:20.014 04:15:34 -- host/digest.sh@56 -- # bs=131072
00:24:20.014 04:15:34 -- host/digest.sh@56 -- # qd=16
00:24:20.014 04:15:34 -- host/digest.sh@58 -- # bperfpid=3954804
00:24:20.014 04:15:34 -- host/digest.sh@60 -- # waitforlisten 3954804 /var/tmp/bperf.sock
00:24:20.014 04:15:34 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:20.014 04:15:34 -- common/autotest_common.sh@817 -- # '[' -z 3954804 ']'
00:24:20.014 04:15:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:20.014 04:15:34 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:20.014 04:15:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:20.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:20.014 04:15:34 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:20.014 04:15:34 -- common/autotest_common.sh@10 -- # set +x
00:24:20.014 [2024-04-19 04:15:34.361328] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:24:20.014 [2024-04-19 04:15:34.361403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954804 ]
00:24:20.014 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:20.014 Zero copy mechanism will not be used.
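Here the harness starts a fresh bdevperf instance for the next subtest (randwrite, 128 KiB I/O, queue depth 16). The -z flag is what makes the two-phase setup possible: bdevperf comes up idle, serving RPCs on /var/tmp/bperf.sock, and only generates I/O once a perform_tests RPC arrives. A sketch of the launch-and-wait pattern, assuming the paths from this run:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -m 2: core mask (binary 10, i.e. core 1 only); -r: RPC listen socket;
# -w/-o/-q/-t: workload type, I/O size, queue depth, runtime in seconds;
# -z: stay idle until a perform_tests RPC is received.
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# waitforlisten (from common/autotest_common.sh) then polls the socket until
# the RPC server answers, so configuration RPCs can be issued safely before
# any I/O is started.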
00:24:20.014 EAL: No free 2048 kB hugepages reported on node 1
00:24:20.014 [2024-04-19 04:15:34.433819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:20.014 [2024-04-19 04:15:34.523291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:20.273 04:15:34 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:20.273 04:15:34 -- common/autotest_common.sh@850 -- # return 0
00:24:20.273 04:15:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.273 04:15:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:20.532 04:15:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:20.532 04:15:34 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:20.532 04:15:34 -- common/autotest_common.sh@10 -- # set +x
00:24:20.532 04:15:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:20.532 04:15:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.532 04:15:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:20.791 nvme0n1
00:24:21.051 04:15:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:21.051 04:15:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:21.051 04:15:35 -- common/autotest_common.sh@10 -- # set +x
00:24:21.051 04:15:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:21.051 04:15:35 -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:21.051 04:15:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:21.051 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:21.051 Zero copy mechanism will not be used.
00:24:21.051 Running I/O for 2 seconds...
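The trace above wires up the digest-error test over two RPC channels: bperf_rpc targets the freshly started bdevperf (the initiator) on /var/tmp/bperf.sock, while rpc_cmd appears to target the nvmf target app on its default socket. A condensed sketch of the sequence as traced, with flag meanings as I read them (treating -i 32 as the injection interval of the accel error-injection RPC is an assumption; the log does not spell it out):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

# Initiator: count NVMe errors per status code and retry failed I/O forever
# (--bdev-retry-count -1), so each injected digest failure is retried and
# recorded instead of failing the job outright.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: clear any crc32c fault injection left over from the last subtest.
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Initiator: attach to the target with TCP data digest (--ddgst) enabled, so
# every data PDU carries a CRC32C that both ends verify.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the result of crc32c accel operations (every 32nd one, if
# -i is the interval), which the initiator then reports as the
# "Data digest error" / TRANSIENT TRANSPORT ERROR records that follow.
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the timed run; bdevperf was idling on exactly this RPC (-z).
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests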
00:24:21.051 [2024-04-19 04:15:35.457054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.457553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.457588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.465824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.466276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.466304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.474269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.474723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.474751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.482438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.482528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.482554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.491864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.492334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.492365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.501623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.502089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.502120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.511930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.512380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.512406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.521885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.522359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.522385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.531418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.531896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.531921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.540532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.540706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.540730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.550217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.550687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.550712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.559750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.560219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.560244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.568806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.051 [2024-04-19 04:15:35.569263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.051 [2024-04-19 04:15:35.569287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.051 [2024-04-19 04:15:35.577403] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.311 [2024-04-19 04:15:35.577866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.311 [2024-04-19 04:15:35.577891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.311 [2024-04-19 04:15:35.585383] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.311 [2024-04-19 04:15:35.585502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.311 [2024-04-19 04:15:35.585527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.592815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.593234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.593259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.599179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.599579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.599605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.605393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.605803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.605828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.612914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.620698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.621123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.629611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.630049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.630073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.637860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.638259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.638283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.644710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.645095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.645119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.651727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.652124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.652148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.658629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.659027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.659052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.666761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.667160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.667184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.674064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.674466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.674491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.681495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.681891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 
[2024-04-19 04:15:35.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.690709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.691105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.691129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.698649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.699045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.699069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.706807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.707243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.707267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.714926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.715322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.715357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.723515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.723912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.723936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.730981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.731384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.731410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.738740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.739148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.739173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.746386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.746797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.746821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.754054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.754459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.754483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.761895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.762285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.762309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.769264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.312 [2024-04-19 04:15:35.769685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.312 [2024-04-19 04:15:35.769709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.312 [2024-04-19 04:15:35.776864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.777261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.777286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.784032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.784433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.784457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.791841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.792258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.792282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.799950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.800361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.800386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.808089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.808495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.808520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.815177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.815580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.815606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.822456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.822844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.822869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.313 [2024-04-19 04:15:35.831160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.313 [2024-04-19 04:15:35.831564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.313 [2024-04-19 04:15:35.831589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.838863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.839275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.839299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.847199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.847606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.847635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.855459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.855851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.855875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.864891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.865286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.865310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.872640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.873059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.873083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.880793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.881177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.881201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.889005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.889412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.889437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.897242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.897649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.897673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.904836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 
[2024-04-19 04:15:35.905249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.905273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.913457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.913859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.913884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.921246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.921676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.921700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.573 [2024-04-19 04:15:35.929387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.573 [2024-04-19 04:15:35.929791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.573 [2024-04-19 04:15:35.929815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.938055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.938460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.938484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.946389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.946792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.946816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.953883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.954360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.961277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.961681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.961705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.968160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.968560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.968584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.974759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.975153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.975177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.981990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.982381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.982405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.989309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.989691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.989715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:35.996540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:35.996936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:35.996960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.002968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.003367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.003392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.010827] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.011287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.011312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.020494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.020932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.020956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.028999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.029392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.029417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.036105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.036501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.036526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.042400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.042799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.042823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.048816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.049207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.049236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.574 [2024-04-19 04:15:36.054953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:21.574 [2024-04-19 04:15:36.055340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.574 [2024-04-19 04:15:36.055373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:21.574 [2024-04-19 04:15:36.060941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:21.574 [2024-04-19 04:15:36.061349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.574 [2024-04-19 04:15:36.061374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:21.574 [2024-04-19 04:15:36.067051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:21.574 [2024-04-19 04:15:36.067449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.574 [2024-04-19 04:15:36.067474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:21.574 [2024-04-19 04:15:36.073015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:21.574 [2024-04-19 04:15:36.073413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.574 [2024-04-19 04:15:36.073438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line triplet (a data_crc32_calc_done data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90, the offending WRITE with sqid:1 cid:15 nsid:1 len:32 and varying LBA, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061) repeats continuously from 04:15:36.079030 through 04:15:36.985773 (job time 00:24:21.574 to 00:24:22.620), at which point the capture breaks off mid-entry ...]
tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:36.986174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:36.986198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:36.991696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:36.992085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:36.992110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:36.997634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:36.998026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:36.998050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.003575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.003963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.003987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.009514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.009913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.009938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.015579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.015974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.016000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.021459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.021851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.021876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.027363] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.027751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.027775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.033293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.033691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.033716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.039217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.039610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.039635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.045086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.045466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.045491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.051068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.051465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.051489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.057416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.057812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.057836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.064813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.065211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.065237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
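The dump above and below repeats one record pattern per I/O: tcp.c's data_crc32_calc_done() reports a data digest (CRC32C) mismatch on the TCP qpair, then the driver prints the affected WRITE command and its completion, whose status (00/22) is SCT 0x0 / SC 0x22, COMMAND TRANSIENT TRANSPORT ERROR. That is the expected behaviour here: this digest test corrupts data digests on purpose and later checks that the errors were counted. One rough way to tally the pattern from a saved copy of this console output (build.log is a hypothetical file name):

  # every injected digest error should surface as one transient-transport-error completion,
  # so the two counts below are expected to match
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log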
00:24:22.620 [2024-04-19 04:15:37.072288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.072672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.072697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.080045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.080433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.080458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.087382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.087781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.087805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.095188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.095580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.095604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.102529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.102928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.102953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.110225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.110629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.110654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.117839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.118243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.118268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.125450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.125834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.125858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.133168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.133571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.133596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.620 [2024-04-19 04:15:37.140558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.620 [2024-04-19 04:15:37.140974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.620 [2024-04-19 04:15:37.140999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.880 [2024-04-19 04:15:37.148069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.880 [2024-04-19 04:15:37.148465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.148493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.154695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.155086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.155110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.162579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.163040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.163065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.171551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.171945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.171970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.179955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.180437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.180462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.188782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.189275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.189298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.197509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.198012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.198036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.205973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.206357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.206382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.214306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.214740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.214764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.222518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.222968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.222992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.229771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.230166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.230191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.236823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.237211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.237234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.244189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.244587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.244613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.250335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.250738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.250763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.256434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.256829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.256854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.262495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.262890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.262915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.268562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.268946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.268971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.275960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.276366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 
[2024-04-19 04:15:37.276390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.283439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.283837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.283861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.290648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.291038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.291062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.296737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.297127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.297150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.302738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.303119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.303143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.308791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.309194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.309218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.314889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.315273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.315298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.320875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.321254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.321278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.326847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.327250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.327274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.332732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.881 [2024-04-19 04:15:37.333130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.881 [2024-04-19 04:15:37.333159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.881 [2024-04-19 04:15:37.338672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.339064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.339089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.344577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.344970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.344994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.350494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.350890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.350914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.356840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.357226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.357250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.364789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.365296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.365321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.372721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.373117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.379818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.380260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.380284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.387700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.388118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.388142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.395445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.395931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.395955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.882 [2024-04-19 04:15:37.402545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:22.882 [2024-04-19 04:15:37.402938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.882 [2024-04-19 04:15:37.402962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.141 [2024-04-19 04:15:37.409118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:23.141 [2024-04-19 04:15:37.409508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.141 [2024-04-19 04:15:37.409533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.141 [2024-04-19 04:15:37.415944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90 00:24:23.141 [2024-04-19 04:15:37.416374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.141 [2024-04-19 04:15:37.416400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:23.141 [2024-04-19 04:15:37.423475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:23.141 [2024-04-19 04:15:37.423878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.141 [2024-04-19 04:15:37.423902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:23.141 [2024-04-19 04:15:37.431545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:23.141 [2024-04-19 04:15:37.432063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.141 [2024-04-19 04:15:37.432087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:23.141 [2024-04-19 04:15:37.440402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:23.142 [2024-04-19 04:15:37.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.142 [2024-04-19 04:15:37.440821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:23.142 [2024-04-19 04:15:37.447667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2327530) with pdu=0x2000190fef90
00:24:23.142 [2024-04-19 04:15:37.447887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.142 [2024-04-19 04:15:37.447910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:23.142 
00:24:23.142 Latency(us)
00:24:23.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.142 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:23.142 nvme0n1 : 2.00 4351.02 543.88 0.00 0.00 3670.20 2815.07 11617.75
00:24:23.142 ===================================================================================================================
00:24:23.142 Total : 4351.02 543.88 0.00 0.00 3670.20 2815.07 11617.75
00:24:23.142 0
00:24:23.142 04:15:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:23.142 04:15:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:23.142 04:15:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:23.142 04:15:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:23.142 | .driver_specific
00:24:23.142 | .nvme_error
00:24:23.142 | .status_code
00:24:23.142 | .command_transient_transport_error'
00:24:23.142 04:15:37 -- host/digest.sh@71 -- # (( 281 > 0 ))
00:24:23.142 04:15:37 -- host/digest.sh@73 -- # killprocess 3954804
00:24:23.142 04:15:37 -- common/autotest_common.sh@936 -- # '[' -z 3954804 ']'
00:24:23.142 04:15:37 -- common/autotest_common.sh@940 -- # kill -0 3954804
00:24:23.142 04:15:37 -- common/autotest_common.sh@941 -- # uname
00:24:23.142 04:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:23.142 04:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3954804
00:24:23.401 04:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:23.401 04:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:23.401 04:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3954804'
00:24:23.401 killing process with pid 3954804
00:24:23.401 04:15:37 -- common/autotest_common.sh@955 -- # kill 3954804
00:24:23.401 Received shutdown signal, test time was about 2.000000 seconds
00:24:23.401 
00:24:23.401 Latency(us)
00:24:23.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:23.401 ===================================================================================================================
00:24:23.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:23.401 04:15:37 -- common/autotest_common.sh@960 -- # wait 3954804
00:24:23.401 04:15:37 -- host/digest.sh@116 -- # killprocess 3952659
00:24:23.401 04:15:37 -- common/autotest_common.sh@936 -- # '[' -z 3952659 ']'
00:24:23.401 04:15:37 -- common/autotest_common.sh@940 -- # kill -0 3952659
00:24:23.401 04:15:37 -- common/autotest_common.sh@941 -- # uname
00:24:23.401 04:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:23.401 04:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3952659
00:24:23.661 04:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:23.661 04:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:23.661 04:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3952659'
00:24:23.661 killing process with pid 3952659
00:24:23.661 04:15:37 -- common/autotest_common.sh@955 -- # kill 3952659
00:24:23.661 04:15:37 -- common/autotest_common.sh@960 -- # wait 3952659
00:24:23.661 
00:24:23.661 real 0m15.247s
00:24:23.661 user 0m30.404s
00:24:23.661 sys 0m4.073s
00:24:23.661 04:15:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:23.661 04:15:38 -- common/autotest_common.sh@10 -- # set +x
00:24:23.661 ************************************
00:24:23.661 END TEST nvmf_digest_error
00:24:23.661 ************************************
00:24:23.919 04:15:38 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:24:23.919 04:15:38 -- host/digest.sh@150 -- # nvmftestfini
00:24:23.919 04:15:38 -- nvmf/common.sh@477 -- # nvmfcleanup
00:24:23.919 04:15:38 -- nvmf/common.sh@117 -- # sync
00:24:23.919 04:15:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:23.919 04:15:38 -- nvmf/common.sh@120 -- # set +e
00:24:23.919 04:15:38 -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:23.919 04:15:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:23.919 rmmod nvme_tcp
00:24:23.919 rmmod nvme_fabrics
00:24:23.919 rmmod nvme_keyring
00:24:23.919 04:15:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:23.919 04:15:38 -- nvmf/common.sh@124 -- # set -e
00:24:23.919 04:15:38 -- nvmf/common.sh@125 -- # return 0
00:24:23.919 04:15:38 -- nvmf/common.sh@478 -- # '[' -n 3952659 ']'
00:24:23.919 04:15:38 -- nvmf/common.sh@479 -- # killprocess 3952659
00:24:23.919 04:15:38 -- common/autotest_common.sh@936 -- # '[' -z 3952659 ']'
00:24:23.919 04:15:38 -- common/autotest_common.sh@940 -- # kill -0 3952659
00:24:23.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3952659) - No such process
00:24:23.919 04:15:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3952659 is not found'
00:24:23.919 Process with pid 3952659 is not found
00:24:23.919 04:15:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:24:23.919 04:15:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:24:23.919 04:15:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:24:23.919 04:15:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:23.919 04:15:38 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:23.919 04:15:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:23.919 04:15:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:23.919 04:15:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:26.450 04:15:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:26.450 
00:24:26.450 real 0m39.624s
00:24:26.450 user 1m3.620s
00:24:26.450 sys 0m12.768s
00:24:26.450 04:15:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:26.450 04:15:40 -- common/autotest_common.sh@10 -- # set +x
00:24:26.450 ************************************
00:24:26.450 END TEST nvmf_digest
00:24:26.450 ************************************
00:24:26.450 04:15:40 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:24:26.450 04:15:40 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:24:26.450 04:15:40 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:24:26.450 04:15:40 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:26.450 04:15:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:24:26.450 04:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:26.450 04:15:40 -- common/autotest_common.sh@10 -- # set +x
00:24:26.450 ************************************
00:24:26.450 START TEST nvmf_bdevperf
00:24:26.450 ************************************
00:24:26.450 04:15:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:26.450 * Looking for test storage...
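Before moving on to the bdevperf test: the pass/fail check for the digest-error run that just ended is driven by get_transient_errcount above. The harness asks the bdevperf RPC socket for iostat and extracts the NVMe transient-transport-error counter with jq, then asserts it is non-zero (the (( 281 > 0 )) check). The same query can be reproduced by hand while a bperf instance is still up; this sketch assumes the /var/tmp/bperf.sock socket and nvme0n1 bdev name from the run above:

  # read the transient transport error counter for one bdev over the bperf RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'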
00:24:26.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.450 04:15:40 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.450 04:15:40 -- nvmf/common.sh@7 -- # uname -s 00:24:26.450 04:15:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.450 04:15:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.450 04:15:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.450 04:15:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.450 04:15:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.450 04:15:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.450 04:15:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.450 04:15:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.450 04:15:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.450 04:15:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.450 04:15:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:26.450 04:15:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:26.450 04:15:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.450 04:15:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.450 04:15:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.450 04:15:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.450 04:15:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.450 04:15:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.450 04:15:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.451 04:15:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.451 04:15:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.451 04:15:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.451 04:15:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.451 04:15:40 -- paths/export.sh@5 -- # export PATH 00:24:26.451 04:15:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.451 04:15:40 -- nvmf/common.sh@47 -- # : 0 00:24:26.451 04:15:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.451 04:15:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.451 04:15:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.451 04:15:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.451 04:15:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.451 04:15:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.451 04:15:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.451 04:15:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.451 04:15:40 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.451 04:15:40 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.451 04:15:40 -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:26.451 04:15:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:26.451 04:15:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.451 04:15:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:26.451 04:15:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:26.451 04:15:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:26.451 04:15:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.451 04:15:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.451 04:15:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.451 04:15:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:26.451 04:15:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:26.451 04:15:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.451 04:15:40 -- common/autotest_common.sh@10 -- # set +x 00:24:31.715 04:15:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:31.715 04:15:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.715 04:15:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.715 04:15:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.715 04:15:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.715 04:15:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.715 04:15:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.715 04:15:46 -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.715 04:15:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.715 04:15:46 -- nvmf/common.sh@296 
-- # e810=() 00:24:31.715 04:15:46 -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.715 04:15:46 -- nvmf/common.sh@297 -- # x722=() 00:24:31.715 04:15:46 -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.715 04:15:46 -- nvmf/common.sh@298 -- # mlx=() 00:24:31.715 04:15:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.715 04:15:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.715 04:15:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.715 04:15:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.715 04:15:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.715 04:15:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.715 04:15:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.715 04:15:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.715 04:15:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.715 04:15:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.715 04:15:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.715 04:15:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.715 Found 
net devices under 0000:af:00.0: cvl_0_0 00:24:31.715 04:15:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.715 04:15:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.715 04:15:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.715 04:15:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.715 04:15:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.715 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.715 04:15:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.715 04:15:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:31.715 04:15:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:31.715 04:15:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:31.715 04:15:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.715 04:15:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.715 04:15:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.715 04:15:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.715 04:15:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.715 04:15:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.715 04:15:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.715 04:15:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.715 04:15:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.715 04:15:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.715 04:15:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.715 04:15:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.715 04:15:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.715 04:15:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.715 04:15:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.715 04:15:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.715 04:15:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.974 04:15:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.974 04:15:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.974 04:15:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:24:31.974 00:24:31.974 --- 10.0.0.2 ping statistics --- 00:24:31.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.974 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:31.974 04:15:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:24:31.974 00:24:31.974 --- 10.0.0.1 ping statistics --- 00:24:31.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.974 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:31.974 04:15:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.974 04:15:46 -- nvmf/common.sh@411 -- # return 0 00:24:31.974 04:15:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:31.974 04:15:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.974 04:15:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:31.974 04:15:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:31.974 04:15:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.974 04:15:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:31.974 04:15:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:31.974 04:15:46 -- host/bdevperf.sh@25 -- # tgt_init 00:24:31.974 04:15:46 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:31.974 04:15:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:31.974 04:15:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:31.974 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:31.974 04:15:46 -- nvmf/common.sh@470 -- # nvmfpid=3959077 00:24:31.974 04:15:46 -- nvmf/common.sh@471 -- # waitforlisten 3959077 00:24:31.974 04:15:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.974 04:15:46 -- common/autotest_common.sh@817 -- # '[' -z 3959077 ']' 00:24:31.974 04:15:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.974 04:15:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:31.974 04:15:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.974 04:15:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:31.974 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:31.974 [2024-04-19 04:15:46.393240] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:31.974 [2024-04-19 04:15:46.393297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.974 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.974 [2024-04-19 04:15:46.472623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.232 [2024-04-19 04:15:46.559185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.232 [2024-04-19 04:15:46.559233] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.232 [2024-04-19 04:15:46.559244] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.232 [2024-04-19 04:15:46.559253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.232 [2024-04-19 04:15:46.559261] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
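For reference, the interface wiring the harness performed above, consolidated out of the xtrace noise into a plain sketch (the cvl_0_0/cvl_0_1 names and the cvl_0_0_ns_spdk namespace are specific to this run; both are ports of the same E810 NIC, the first serving as NVMe/TCP target, the second as initiator; run as root):

# Put the target port in its own netns so one host can run both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic arriving on the initiator-facing port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity: each side reaches the other (matches the pings logged above).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1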
00:24:32.232 [2024-04-19 04:15:46.559372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.232 [2024-04-19 04:15:46.559820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.232 [2024-04-19 04:15:46.559823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.232 04:15:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:32.232 04:15:46 -- common/autotest_common.sh@850 -- # return 0 00:24:32.232 04:15:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:32.232 04:15:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.232 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.232 04:15:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.232 04:15:46 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.232 04:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.232 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.232 [2024-04-19 04:15:46.703704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.232 04:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.232 04:15:46 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.232 04:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.232 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.232 Malloc0 00:24:32.232 04:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.232 04:15:46 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.232 04:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.232 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.490 04:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.490 04:15:46 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.490 04:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.490 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.490 04:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.490 04:15:46 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.490 04:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.490 04:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:32.490 [2024-04-19 04:15:46.775403] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.490 04:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.490 04:15:46 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:32.490 04:15:46 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:32.490 04:15:46 -- nvmf/common.sh@521 -- # config=() 00:24:32.490 04:15:46 -- nvmf/common.sh@521 -- # local subsystem config 00:24:32.490 04:15:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:32.490 04:15:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:32.490 { 00:24:32.490 "params": { 00:24:32.490 "name": "Nvme$subsystem", 00:24:32.490 "trtype": "$TEST_TRANSPORT", 00:24:32.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:32.490 "adrfam": "ipv4", 00:24:32.490 "trsvcid": "$NVMF_PORT", 00:24:32.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:32.490 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:32.490 "hdgst": ${hdgst:-false}, 00:24:32.490 "ddgst": ${ddgst:-false} 00:24:32.490 }, 00:24:32.490 "method": "bdev_nvme_attach_controller" 00:24:32.490 } 00:24:32.490 EOF 00:24:32.490 )") 00:24:32.490 04:15:46 -- nvmf/common.sh@543 -- # cat 00:24:32.490 04:15:46 -- nvmf/common.sh@545 -- # jq . 00:24:32.490 04:15:46 -- nvmf/common.sh@546 -- # IFS=, 00:24:32.490 04:15:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:32.490 "params": { 00:24:32.490 "name": "Nvme1", 00:24:32.490 "trtype": "tcp", 00:24:32.490 "traddr": "10.0.0.2", 00:24:32.490 "adrfam": "ipv4", 00:24:32.490 "trsvcid": "4420", 00:24:32.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.490 "hdgst": false, 00:24:32.490 "ddgst": false 00:24:32.490 }, 00:24:32.490 "method": "bdev_nvme_attach_controller" 00:24:32.490 }' 00:24:32.490 [2024-04-19 04:15:46.828014] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:32.490 [2024-04-19 04:15:46.828068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959103 ] 00:24:32.490 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.490 [2024-04-19 04:15:46.907276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.490 [2024-04-19 04:15:46.992702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.055 Running I/O for 1 seconds... 00:24:33.986 00:24:33.986 Latency(us) 00:24:33.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:33.986 Verification LBA range: start 0x0 length 0x4000 00:24:33.986 Nvme1n1 : 1.01 7163.45 27.98 0.00 0.00 17786.71 4021.53 16443.58 00:24:33.986 =================================================================================================================== 00:24:33.986 Total : 7163.45 27.98 0.00 0.00 17786.71 4021.53 16443.58 00:24:34.244 04:15:48 -- host/bdevperf.sh@30 -- # bdevperfpid=3959377 00:24:34.244 04:15:48 -- host/bdevperf.sh@32 -- # sleep 3 00:24:34.244 04:15:48 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:34.244 04:15:48 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:34.244 04:15:48 -- nvmf/common.sh@521 -- # config=() 00:24:34.244 04:15:48 -- nvmf/common.sh@521 -- # local subsystem config 00:24:34.244 04:15:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:34.244 04:15:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:34.244 { 00:24:34.244 "params": { 00:24:34.244 "name": "Nvme$subsystem", 00:24:34.244 "trtype": "$TEST_TRANSPORT", 00:24:34.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.244 "adrfam": "ipv4", 00:24:34.244 "trsvcid": "$NVMF_PORT", 00:24:34.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.244 "hdgst": ${hdgst:-false}, 00:24:34.244 "ddgst": ${ddgst:-false} 00:24:34.244 }, 00:24:34.244 "method": "bdev_nvme_attach_controller" 00:24:34.244 } 00:24:34.244 EOF 00:24:34.244 )") 00:24:34.244 04:15:48 -- nvmf/common.sh@543 -- # cat 00:24:34.244 04:15:48 -- nvmf/common.sh@545 -- # jq . 
00:24:34.244 04:15:48 -- nvmf/common.sh@546 -- # IFS=, 00:24:34.244 04:15:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:34.244 "params": { 00:24:34.244 "name": "Nvme1", 00:24:34.244 "trtype": "tcp", 00:24:34.244 "traddr": "10.0.0.2", 00:24:34.244 "adrfam": "ipv4", 00:24:34.244 "trsvcid": "4420", 00:24:34.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.244 "hdgst": false, 00:24:34.244 "ddgst": false 00:24:34.244 }, 00:24:34.244 "method": "bdev_nvme_attach_controller" 00:24:34.244 }' 00:24:34.244 [2024-04-19 04:15:48.589009] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:34.244 [2024-04-19 04:15:48.589054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959377 ] 00:24:34.244 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.244 [2024-04-19 04:15:48.658713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.244 [2024-04-19 04:15:48.739978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.501 Running I/O for 15 seconds... 00:24:37.782 04:15:51 -- host/bdevperf.sh@33 -- # kill -9 3959077 00:24:37.782 04:15:51 -- host/bdevperf.sh@35 -- # sleep 3 00:24:37.782 [2024-04-19 04:15:51.562943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.562989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.782 [2024-04-19 04:15:51.563151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.782 [2024-04-19 04:15:51.563162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs elided: every remaining in-flight command on qid:1 is printed and aborted the same way -- WRITEs covering lba 17952 through 18768 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READs covering lba 17752 through 17880 (len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), cid varying per entry, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:37.785 [2024-04-19 04:15:51.565914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3f900 is same with the state(5) to be set 00:24:37.785 [2024-04-19 04:15:51.565925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:24:37.785 [2024-04-19 04:15:51.565933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:37.785 [2024-04-19 04:15:51.565941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:8 PRP1 0x0 PRP2 0x0 00:24:37.785 [2024-04-19 04:15:51.565955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.785 [2024-04-19 04:15:51.566004] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa3f900 was disconnected and freed. reset controller. 00:24:37.785 [2024-04-19 04:15:51.566059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.785 [2024-04-19 04:15:51.566072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.785 [2024-04-19 04:15:51.566082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.785 [2024-04-19 04:15:51.566092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.785 [2024-04-19 04:15:51.566102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.785 [2024-04-19 04:15:51.566111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.785 [2024-04-19 04:15:51.566121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.785 [2024-04-19 04:15:51.566131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.785 [2024-04-19 04:15:51.566139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.785 [2024-04-19 04:15:51.570324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.785 [2024-04-19 04:15:51.570358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.785 [2024-04-19 04:15:51.571146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.571417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.571432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.785 [2024-04-19 04:15:51.571442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.785 [2024-04-19 04:15:51.571704] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.785 [2024-04-19 04:15:51.571966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.785 [2024-04-19 04:15:51.571977] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:24:37.785 [2024-04-19 04:15:51.571986] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.785 [2024-04-19 04:15:51.576202] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.785 [2024-04-19 04:15:51.585137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.785 [2024-04-19 04:15:51.585730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.585999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.586031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.785 [2024-04-19 04:15:51.586053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.785 [2024-04-19 04:15:51.586641] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.785 [2024-04-19 04:15:51.586981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.785 [2024-04-19 04:15:51.586992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.785 [2024-04-19 04:15:51.587007] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.785 [2024-04-19 04:15:51.591219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.785 [2024-04-19 04:15:51.599919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.785 [2024-04-19 04:15:51.600456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.600771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.600803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.785 [2024-04-19 04:15:51.600824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.785 [2024-04-19 04:15:51.601416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.785 [2024-04-19 04:15:51.601724] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.785 [2024-04-19 04:15:51.601735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.785 [2024-04-19 04:15:51.601745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.785 [2024-04-19 04:15:51.605958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.785 [2024-04-19 04:15:51.614664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.785 [2024-04-19 04:15:51.615217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.615413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.785 [2024-04-19 04:15:51.615445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.615468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.616031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.616293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.616305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.616314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.620540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.786 [2024-04-19 04:15:51.629240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.629824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.630193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.630223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.630245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.630823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.631087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.631098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.631107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.635320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.786 [2024-04-19 04:15:51.643762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.644231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.644511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.644544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.644567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.645140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.645727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.645752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.645772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.650037] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.786 [2024-04-19 04:15:51.658506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.659043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.659233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.659263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.659285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.659878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.660197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.660208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.660217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.664429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.786 [2024-04-19 04:15:51.673111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.673660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.673947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.673962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.673972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.674233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.674503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.674515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.674524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.678728] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.786 [2024-04-19 04:15:51.687654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.688156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.688407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.688423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.688433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.688694] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.688957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.688969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.688978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.693185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.786 [2024-04-19 04:15:51.702372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.702916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.703115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.703129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.703139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.703409] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.703673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.703684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.703693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.707902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.786 [2024-04-19 04:15:51.717073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.717632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.717950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.717964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.786 [2024-04-19 04:15:51.717974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.786 [2024-04-19 04:15:51.718237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.786 [2024-04-19 04:15:51.718508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.786 [2024-04-19 04:15:51.718521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.786 [2024-04-19 04:15:51.718530] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.786 [2024-04-19 04:15:51.722730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.786 [2024-04-19 04:15:51.731657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.786 [2024-04-19 04:15:51.732219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.786 [2024-04-19 04:15:51.732601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.732634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.732655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.733054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.733317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.733329] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.733337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.737551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.787 [2024-04-19 04:15:51.746231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.746778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.747062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.747091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.747112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.747699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.748250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.748265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.748278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.754069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.787 [2024-04-19 04:15:51.761378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.761930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.762248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.762278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.762299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.762886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.763273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.763285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.763294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.767507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.787 [2024-04-19 04:15:51.775944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.776507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.776774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.776804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.776832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.777116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.777385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.777398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.777407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.781612] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.787 [2024-04-19 04:15:51.790540] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.791042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.791329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.791375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.791396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.791969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.792258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.792270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.792279] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.796504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.787 [2024-04-19 04:15:51.805208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.805702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.805948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.805964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.805974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.806237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.806508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.806521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.806530] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.810741] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.787 [2024-04-19 04:15:51.819942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.820432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.820706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.820721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.820730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.820996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.821259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.821272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.821282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.825506] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.787 [2024-04-19 04:15:51.834462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.834947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.835178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.835192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.835202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.835475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.835739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.835751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.835761] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.839992] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.787 [2024-04-19 04:15:51.849185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.849682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.849882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.849896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.849906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.850168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.787 [2024-04-19 04:15:51.850436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.787 [2024-04-19 04:15:51.850448] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.787 [2024-04-19 04:15:51.850458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.787 [2024-04-19 04:15:51.854674] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.787 [2024-04-19 04:15:51.863914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.787 [2024-04-19 04:15:51.864449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.864650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.787 [2024-04-19 04:15:51.864665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.787 [2024-04-19 04:15:51.864674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.787 [2024-04-19 04:15:51.864937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.865203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.865215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.865224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.869446] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.788 [2024-04-19 04:15:51.878632] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.879198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.879511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.879545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.879565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.880147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.880416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.880428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.880437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.884646] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.788 [2024-04-19 04:15:51.893365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.893833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.894109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.894123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.894133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.894404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.894666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.894678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.894687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.898892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.788 [2024-04-19 04:15:51.908094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.908649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.908892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.908922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.908943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.909528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.909817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.909833] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.909842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.914047] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.788 [2024-04-19 04:15:51.922744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.923221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.923494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.923525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.923546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.924120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.924656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.924669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.924678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.928894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.788 [2024-04-19 04:15:51.937335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.937750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.937946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.937960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.937969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.938230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.938499] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.938511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.938520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.942730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.788 [2024-04-19 04:15:51.951927] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.952502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.952774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.952789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.952798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.953059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.953321] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.953332] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.953353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.957567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.788 [2024-04-19 04:15:51.966509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.966923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.967242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.967257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.967287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.967863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.968127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.968139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.968149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.972371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.788 [2024-04-19 04:15:51.981065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.981552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.981748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.981762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.981772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.982034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.982296] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.982307] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.982316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:51.986528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.788 [2024-04-19 04:15:51.995733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.788 [2024-04-19 04:15:51.996228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.996551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.788 [2024-04-19 04:15:51.996586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.788 [2024-04-19 04:15:51.996607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.788 [2024-04-19 04:15:51.996985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.788 [2024-04-19 04:15:51.997248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.788 [2024-04-19 04:15:51.997259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.788 [2024-04-19 04:15:51.997269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.788 [2024-04-19 04:15:52.001485] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.789 [2024-04-19 04:15:52.010435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.010981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.011181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.011195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.011205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.011473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.011735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.011747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.011756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.015967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.789 [2024-04-19 04:15:52.025168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.025591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.025865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.025880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.025890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.026151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.026422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.026434] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.026444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.030656] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.789 [2024-04-19 04:15:52.039855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.040401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.040607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.040637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.040657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.041233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.041822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.041851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.041860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.047709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.789 [2024-04-19 04:15:52.055247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.055744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.055997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.056011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.056021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.056283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.056553] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.056566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.056574] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.060781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.789 [2024-04-19 04:15:52.069980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.070560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.070830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.070861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.070894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.071157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.071427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.071439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.071448] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.075661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.789 [2024-04-19 04:15:52.084666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.085216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.085462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.085477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.085486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.085749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.086011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.086023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.086032] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.090246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.789 [2024-04-19 04:15:52.099212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.099727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.099961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.099992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.100013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.100598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.100932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.100944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.100953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.105171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.789 [2024-04-19 04:15:52.113863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.114378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.114651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.114666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.114676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.114937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.115200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.115211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.115220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.119443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.789 [2024-04-19 04:15:52.128394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.789 [2024-04-19 04:15:52.128897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.129229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.789 [2024-04-19 04:15:52.129260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:37.789 [2024-04-19 04:15:52.129281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:37.789 [2024-04-19 04:15:52.129871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:37.789 [2024-04-19 04:15:52.130253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.789 [2024-04-19 04:15:52.130265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.789 [2024-04-19 04:15:52.130273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.789 [2024-04-19 04:15:52.134486] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.789 [2024-04-19 04:15:52.142931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.789 [2024-04-19 04:15:52.143458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.789 [2024-04-19 04:15:52.143674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.143692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.143703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.143965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.144226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.144237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.144246] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.148458] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.157653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.158211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.158367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.158383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.158392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.158655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.158917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.158929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.158938] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.163149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.172347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.172843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.173206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.173236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.173258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.173842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.174401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.174418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.174430] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.180229] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.187747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.188327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.188645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.188675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.188704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.189272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.189540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.189553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.189562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.193782] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.202481] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.202894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.203169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.203183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.203218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.203803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.204177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.204188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.204197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.208413] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.217112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.217635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.217814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.217829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.217839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.218100] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.218369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.218382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.218391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.222597] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.231794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.232251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.232406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.232423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.232433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.232700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.232964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.232976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.232985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.237197] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.246405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.246831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.247034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.247049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.247059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.247321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.247591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.247603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.247612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.251822] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.261029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.261629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.261816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.261830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.261840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.262102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.790 [2024-04-19 04:15:52.262370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.790 [2024-04-19 04:15:52.262382] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.790 [2024-04-19 04:15:52.262391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.790 [2024-04-19 04:15:52.266609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.790 [2024-04-19 04:15:52.275544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.790 [2024-04-19 04:15:52.276072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.276297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.790 [2024-04-19 04:15:52.276327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.790 [2024-04-19 04:15:52.276359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.790 [2024-04-19 04:15:52.276932] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.791 [2024-04-19 04:15:52.277473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.791 [2024-04-19 04:15:52.277485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.791 [2024-04-19 04:15:52.277494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.791 [2024-04-19 04:15:52.281732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.791 [2024-04-19 04:15:52.290169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.791 [2024-04-19 04:15:52.290743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.791 [2024-04-19 04:15:52.291016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.791 [2024-04-19 04:15:52.291046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:37.791 [2024-04-19 04:15:52.291067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:37.791 [2024-04-19 04:15:52.291579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:37.791 [2024-04-19 04:15:52.291842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.791 [2024-04-19 04:15:52.291854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.791 [2024-04-19 04:15:52.291862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.791 [2024-04-19 04:15:52.296081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.050 [2024-04-19 04:15:52.304767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.050 [2024-04-19 04:15:52.305309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.305532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.305547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.050 [2024-04-19 04:15:52.305557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.050 [2024-04-19 04:15:52.305819] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.050 [2024-04-19 04:15:52.306081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.050 [2024-04-19 04:15:52.306092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.050 [2024-04-19 04:15:52.306100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.050 [2024-04-19 04:15:52.310310] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.050 [2024-04-19 04:15:52.319494] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.050 [2024-04-19 04:15:52.320052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.320366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.320398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.050 [2024-04-19 04:15:52.320420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.050 [2024-04-19 04:15:52.320802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.050 [2024-04-19 04:15:52.321064] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.050 [2024-04-19 04:15:52.321079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.050 [2024-04-19 04:15:52.321088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.050 [2024-04-19 04:15:52.325306] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.050 [2024-04-19 04:15:52.334018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.050 [2024-04-19 04:15:52.334621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.334805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.334835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.050 [2024-04-19 04:15:52.334856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.050 [2024-04-19 04:15:52.335441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.050 [2024-04-19 04:15:52.335854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.050 [2024-04-19 04:15:52.335870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.050 [2024-04-19 04:15:52.335882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.050 [2024-04-19 04:15:52.341692] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.050 [2024-04-19 04:15:52.349086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.050 [2024-04-19 04:15:52.349683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.349998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.050 [2024-04-19 04:15:52.350012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.050 [2024-04-19 04:15:52.350023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.050 [2024-04-19 04:15:52.350283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.350554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.350566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.350575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.354786] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.363729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.364282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.364639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.364671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.364693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.364969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.365230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.365242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.365254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.369468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.378402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.378887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.379171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.379200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.379221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.379779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.380042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.380053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.380063] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.384277] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.392964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.393503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.393625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.393640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.393649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.393912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.394174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.394185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.394194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.398418] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.407597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.408161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.408492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.408524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.408545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.408924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.409186] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.409197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.409206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.413412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.422099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.422668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.422857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.422872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.422882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.423144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.423414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.423427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.423436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.427630] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.436805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.437384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.437548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.437578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.437598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.437947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.438209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.438220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.438230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.442443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.451377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.451927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.452240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.452270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.452291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.452882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.453145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.453157] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.453166] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.457375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.051 [2024-04-19 04:15:52.466068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.051 [2024-04-19 04:15:52.466623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.466896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.051 [2024-04-19 04:15:52.466927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.051 [2024-04-19 04:15:52.466948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.051 [2024-04-19 04:15:52.467537] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.051 [2024-04-19 04:15:52.467903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.051 [2024-04-19 04:15:52.467915] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.051 [2024-04-19 04:15:52.467924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.051 [2024-04-19 04:15:52.472125] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.480808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.481325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.481654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.481695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.481706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.481968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.482230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.482242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.482251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.486485] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.495428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.495988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.496303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.496333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.496369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.496914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.497176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.497188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.497197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.501405] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.510096] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.510670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.510944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.510959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.510968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.511230] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.511500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.511513] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.511522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.515732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.524682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.525262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.525490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.525522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.525543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.526026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.526289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.526300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.526309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.530535] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.539247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.539738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.540015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.540029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.540038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.540299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.540569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.540581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.540591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.544799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.553995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.554480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.554727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.554742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.554755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.555017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.555280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.555291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.555300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.559511] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.052 [2024-04-19 04:15:52.568708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.052 [2024-04-19 04:15:52.569256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.569508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.052 [2024-04-19 04:15:52.569524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.052 [2024-04-19 04:15:52.569534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.052 [2024-04-19 04:15:52.569796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.052 [2024-04-19 04:15:52.570058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.052 [2024-04-19 04:15:52.570070] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.052 [2024-04-19 04:15:52.570079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.052 [2024-04-19 04:15:52.574289] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.311 [2024-04-19 04:15:52.583225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.311 [2024-04-19 04:15:52.583683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.583957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.583972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.311 [2024-04-19 04:15:52.583982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.311 [2024-04-19 04:15:52.584243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.311 [2024-04-19 04:15:52.584511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.311 [2024-04-19 04:15:52.584523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.311 [2024-04-19 04:15:52.584532] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.311 [2024-04-19 04:15:52.588734] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.311 [2024-04-19 04:15:52.598151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.311 [2024-04-19 04:15:52.598621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.598893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.598908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.311 [2024-04-19 04:15:52.598918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.311 [2024-04-19 04:15:52.599184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.311 [2024-04-19 04:15:52.599454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.311 [2024-04-19 04:15:52.599466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.311 [2024-04-19 04:15:52.599475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.311 [2024-04-19 04:15:52.603794] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.311 [2024-04-19 04:15:52.612731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.311 [2024-04-19 04:15:52.613306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.613579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.613595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.311 [2024-04-19 04:15:52.613606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.311 [2024-04-19 04:15:52.613868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.311 [2024-04-19 04:15:52.614130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.311 [2024-04-19 04:15:52.614142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.311 [2024-04-19 04:15:52.614151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.311 [2024-04-19 04:15:52.618359] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.311 [2024-04-19 04:15:52.627286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.311 [2024-04-19 04:15:52.627762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.628037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.628051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.311 [2024-04-19 04:15:52.628061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.311 [2024-04-19 04:15:52.628323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.311 [2024-04-19 04:15:52.628592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.311 [2024-04-19 04:15:52.628604] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.311 [2024-04-19 04:15:52.628613] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.311 [2024-04-19 04:15:52.632819] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.311 [2024-04-19 04:15:52.641994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.311 [2024-04-19 04:15:52.642540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.642823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.311 [2024-04-19 04:15:52.642853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.311 [2024-04-19 04:15:52.642874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.311 [2024-04-19 04:15:52.643380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.311 [2024-04-19 04:15:52.643646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.311 [2024-04-19 04:15:52.643658] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.311 [2024-04-19 04:15:52.643667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.311 [2024-04-19 04:15:52.647871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.656554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.312 [2024-04-19 04:15:52.657097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.657398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.657432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.312 [2024-04-19 04:15:52.657454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.312 [2024-04-19 04:15:52.658028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.312 [2024-04-19 04:15:52.658320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.312 [2024-04-19 04:15:52.658331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.312 [2024-04-19 04:15:52.658340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.312 [2024-04-19 04:15:52.662547] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.671251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.312 [2024-04-19 04:15:52.671843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.672107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.672137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.312 [2024-04-19 04:15:52.672159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.312 [2024-04-19 04:15:52.672495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.312 [2024-04-19 04:15:52.672757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.312 [2024-04-19 04:15:52.672768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.312 [2024-04-19 04:15:52.672777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.312 [2024-04-19 04:15:52.676986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.685906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.312 [2024-04-19 04:15:52.686369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.686686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.686717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.312 [2024-04-19 04:15:52.686737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.312 [2024-04-19 04:15:52.687310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.312 [2024-04-19 04:15:52.687659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.312 [2024-04-19 04:15:52.687675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.312 [2024-04-19 04:15:52.687684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.312 [2024-04-19 04:15:52.691884] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.700605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.312 [2024-04-19 04:15:52.701168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.701368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.701384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.312 [2024-04-19 04:15:52.701394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.312 [2024-04-19 04:15:52.701656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.312 [2024-04-19 04:15:52.701918] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.312 [2024-04-19 04:15:52.701930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.312 [2024-04-19 04:15:52.701939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.312 [2024-04-19 04:15:52.706149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.715337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.312 [2024-04-19 04:15:52.715914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.716228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.312 [2024-04-19 04:15:52.716257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.312 [2024-04-19 04:15:52.716278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.312 [2024-04-19 04:15:52.716702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.312 [2024-04-19 04:15:52.716964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.312 [2024-04-19 04:15:52.716975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.312 [2024-04-19 04:15:52.716984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.312 [2024-04-19 04:15:52.721186] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.312 [2024-04-19 04:15:52.729871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.312 [2024-04-19 04:15:52.730364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.730650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.730680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.312 [2024-04-19 04:15:52.730700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.312 [2024-04-19 04:15:52.731274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.312 [2024-04-19 04:15:52.731580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.312 [2024-04-19 04:15:52.731592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.312 [2024-04-19 04:15:52.731605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.312 [2024-04-19 04:15:52.735814] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.312 [2024-04-19 04:15:52.744498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.312 [2024-04-19 04:15:52.744985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.745259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.745273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.312 [2024-04-19 04:15:52.745283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.312 [2024-04-19 04:15:52.745551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.312 [2024-04-19 04:15:52.745814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.312 [2024-04-19 04:15:52.745826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.312 [2024-04-19 04:15:52.745835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.312 [2024-04-19 04:15:52.750039] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.312 [2024-04-19 04:15:52.759221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.312 [2024-04-19 04:15:52.759802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.760114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.760144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.312 [2024-04-19 04:15:52.760165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.312 [2024-04-19 04:15:52.760553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.312 [2024-04-19 04:15:52.760817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.312 [2024-04-19 04:15:52.760828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.312 [2024-04-19 04:15:52.760837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.312 [2024-04-19 04:15:52.765036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.312 [2024-04-19 04:15:52.773718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.312 [2024-04-19 04:15:52.774305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.774515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.774546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.312 [2024-04-19 04:15:52.774567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.312 [2024-04-19 04:15:52.774902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.312 [2024-04-19 04:15:52.775165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.312 [2024-04-19 04:15:52.775176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.312 [2024-04-19 04:15:52.775186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.312 [2024-04-19 04:15:52.779396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.312 [2024-04-19 04:15:52.788329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.312 [2024-04-19 04:15:52.788902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.789080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.312 [2024-04-19 04:15:52.789094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.313 [2024-04-19 04:15:52.789104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.313 [2024-04-19 04:15:52.789372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.313 [2024-04-19 04:15:52.789634] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.313 [2024-04-19 04:15:52.789646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.313 [2024-04-19 04:15:52.789655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.313 [2024-04-19 04:15:52.793859] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.313 [2024-04-19 04:15:52.803048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.313 [2024-04-19 04:15:52.803554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.803745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.803774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.313 [2024-04-19 04:15:52.803794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.313 [2024-04-19 04:15:52.804384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.313 [2024-04-19 04:15:52.804861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.313 [2024-04-19 04:15:52.804872] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.313 [2024-04-19 04:15:52.804882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.313 [2024-04-19 04:15:52.809090] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.313 [2024-04-19 04:15:52.817769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.313 [2024-04-19 04:15:52.818285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.818535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.818550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.313 [2024-04-19 04:15:52.818560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.313 [2024-04-19 04:15:52.818821] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.313 [2024-04-19 04:15:52.819083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.313 [2024-04-19 04:15:52.819094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.313 [2024-04-19 04:15:52.819103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.313 [2024-04-19 04:15:52.823314] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.313 [2024-04-19 04:15:52.832504] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.313 [2024-04-19 04:15:52.833079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.833380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.313 [2024-04-19 04:15:52.833396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.313 [2024-04-19 04:15:52.833405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.313 [2024-04-19 04:15:52.833666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.313 [2024-04-19 04:15:52.833929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.313 [2024-04-19 04:15:52.833940] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.313 [2024-04-19 04:15:52.833949] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.571 [2024-04-19 04:15:52.838154] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.571 [2024-04-19 04:15:52.847100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.571 [2024-04-19 04:15:52.847675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.571 [2024-04-19 04:15:52.847950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.571 [2024-04-19 04:15:52.847979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.571 [2024-04-19 04:15:52.847999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.571 [2024-04-19 04:15:52.848585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.571 [2024-04-19 04:15:52.848883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.571 [2024-04-19 04:15:52.848894] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.571 [2024-04-19 04:15:52.848903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.853105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.572 [2024-04-19 04:15:52.861790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.862368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.862653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.862682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.862703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.863137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.863530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.863548] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.863561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.869737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.572 [2024-04-19 04:15:52.876843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.877416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.877612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.877626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.877636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.877898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.878159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.878171] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.878181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.882393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.572 [2024-04-19 04:15:52.891580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.892071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.892367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.892398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.892419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.892937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.893199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.893211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.893220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.897432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.572 [2024-04-19 04:15:52.906144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.906723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.907004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.907034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.907054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.907628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.907890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.907901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.907910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.912128] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.572 [2024-04-19 04:15:52.920815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.921395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.921704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.921741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.921762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.922241] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.922512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.922524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.922534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.926737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.572 [2024-04-19 04:15:52.935425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.936001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.936312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.936341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.936378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.936953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.937215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.937226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.937236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.941449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.572 [2024-04-19 04:15:52.950123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.950690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.951002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.951033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.951053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.951641] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.951997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.952008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.952017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.956223] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.572 [2024-04-19 04:15:52.964666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.965242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.965554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.965586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.965614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.965957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.966219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.966231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.966240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.970443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.572 [2024-04-19 04:15:52.979378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.572 [2024-04-19 04:15:52.979952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.980060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.572 [2024-04-19 04:15:52.980074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.572 [2024-04-19 04:15:52.980084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.572 [2024-04-19 04:15:52.980353] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.572 [2024-04-19 04:15:52.980616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.572 [2024-04-19 04:15:52.980628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.572 [2024-04-19 04:15:52.980637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.572 [2024-04-19 04:15:52.984844] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.572 [2024-04-19 04:15:52.994022] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:52.994609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:52.994922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:52.994952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:52.994973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:52.995357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:52.995620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:52.995632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:52.995641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:52.999849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.573 [2024-04-19 04:15:53.008541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.009108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.009377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.009393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.009402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.009668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.009929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.009941] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.009950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.014151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.573 [2024-04-19 04:15:53.023091] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.023677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.023908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.023938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.023960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.024379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.024642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.024653] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.024662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.028871] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.573 [2024-04-19 04:15:53.037808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.038302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.038636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.038667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.038688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.039257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.039525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.039537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.039546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.043749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.573 [2024-04-19 04:15:53.052432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.053010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.053318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.053363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.053386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.053959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.054315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.054326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.054336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.058540] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.573 [2024-04-19 04:15:53.066975] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.067469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.067783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.067813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.067834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.068252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.068522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.068534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.068544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.072750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.573 [2024-04-19 04:15:53.081685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.082280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.082554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.082586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.082607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.083181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.573 [2024-04-19 04:15:53.083651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.573 [2024-04-19 04:15:53.083663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.573 [2024-04-19 04:15:53.083673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.573 [2024-04-19 04:15:53.087880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.573 [2024-04-19 04:15:53.096329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.573 [2024-04-19 04:15:53.096915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.097226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.573 [2024-04-19 04:15:53.097255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.573 [2024-04-19 04:15:53.097276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.573 [2024-04-19 04:15:53.097559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.097822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.097841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.097850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.102055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.833 [2024-04-19 04:15:53.110980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.111482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.111673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.111688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.111698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.111960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.112222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.112233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.112242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.116453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.833 [2024-04-19 04:15:53.125631] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.126200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.126454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.126487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.126508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.127082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.127492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.127504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.127513] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.131716] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.833 [2024-04-19 04:15:53.140152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.140740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.140965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.140995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.141015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.141435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.141698] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.141710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.141722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.145924] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.833 [2024-04-19 04:15:53.154861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.155417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.155730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.155761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.155781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.156303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.156572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.156584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.156593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.160800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.833 [2024-04-19 04:15:53.169482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.169930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.170180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.170195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.170204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.170472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.170735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.170746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.170755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.174955] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.833 [2024-04-19 04:15:53.184136] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.184728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.185038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.185068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.185088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.185676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.185939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.185950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.185959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.190174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.833 [2024-04-19 04:15:53.198867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.199438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.199723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.199754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.199774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.200361] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.200676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.200687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.200697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.204897] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.833 [2024-04-19 04:15:53.213587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.833 [2024-04-19 04:15:53.214168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.214475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.833 [2024-04-19 04:15:53.214508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.833 [2024-04-19 04:15:53.214529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.833 [2024-04-19 04:15:53.214918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.833 [2024-04-19 04:15:53.215180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.833 [2024-04-19 04:15:53.215191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.833 [2024-04-19 04:15:53.215200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.833 [2024-04-19 04:15:53.219409] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.834 [2024-04-19 04:15:53.228091] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.228647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.228953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.228984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.229005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.229592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.229932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.229944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.229953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.234152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.834 [2024-04-19 04:15:53.242590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.243162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.243361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.243377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.243386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.243647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.243909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.243920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.243929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.248132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.834 [2024-04-19 04:15:53.257309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.257874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.258182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.258211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.258232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.258778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.259162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.259179] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.259192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.265375] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.834 [2024-04-19 04:15:53.272431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.272811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.273102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.273132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.273153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.273716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.273980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.273992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.274001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.278208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.834 [2024-04-19 04:15:53.287144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.287711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.287995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.288027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.288047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.288613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.288920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.288935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.288948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.294749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.834 [2024-04-19 04:15:53.302336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.834 [2024-04-19 04:15:53.302828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.303036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.834 [2024-04-19 04:15:53.303051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:38.834 [2024-04-19 04:15:53.303061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:38.834 [2024-04-19 04:15:53.303324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:38.834 [2024-04-19 04:15:53.303591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.834 [2024-04-19 04:15:53.303605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.834 [2024-04-19 04:15:53.303615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.834 [2024-04-19 04:15:53.307825] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.834 [2024-04-19 04:15:53.317023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.834 [2024-04-19 04:15:53.317488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.317712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.317727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.834 [2024-04-19 04:15:53.317737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.834 [2024-04-19 04:15:53.318000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.834 [2024-04-19 04:15:53.318262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.834 [2024-04-19 04:15:53.318273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.834 [2024-04-19 04:15:53.318282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.834 [2024-04-19 04:15:53.322495] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.834 [2024-04-19 04:15:53.331695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.834 [2024-04-19 04:15:53.332185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.332407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.332423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.834 [2024-04-19 04:15:53.332438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.834 [2024-04-19 04:15:53.332700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.834 [2024-04-19 04:15:53.332962] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.834 [2024-04-19 04:15:53.332974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.834 [2024-04-19 04:15:53.332983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.834 [2024-04-19 04:15:53.337206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.834 [2024-04-19 04:15:53.346413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.834 [2024-04-19 04:15:53.346949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.347155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.834 [2024-04-19 04:15:53.347169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:38.834 [2024-04-19 04:15:53.347179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:38.834 [2024-04-19 04:15:53.347447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:38.834 [2024-04-19 04:15:53.347710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.834 [2024-04-19 04:15:53.347721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.834 [2024-04-19 04:15:53.347730] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.834 [2024-04-19 04:15:53.351939] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.095 [2024-04-19 04:15:53.361145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.095 [2024-04-19 04:15:53.361643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.361836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.361850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.095 [2024-04-19 04:15:53.361860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.095 [2024-04-19 04:15:53.362121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.095 [2024-04-19 04:15:53.362391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.095 [2024-04-19 04:15:53.362403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.095 [2024-04-19 04:15:53.362412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.095 [2024-04-19 04:15:53.366621] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.095 [2024-04-19 04:15:53.375813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.095 [2024-04-19 04:15:53.376275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.376472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.376487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.095 [2024-04-19 04:15:53.376497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.095 [2024-04-19 04:15:53.376764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.095 [2024-04-19 04:15:53.377027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.095 [2024-04-19 04:15:53.377039] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.095 [2024-04-19 04:15:53.377048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.095 [2024-04-19 04:15:53.381262] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.095 [2024-04-19 04:15:53.390471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.095 [2024-04-19 04:15:53.390934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.391156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.095 [2024-04-19 04:15:53.391170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.095 [2024-04-19 04:15:53.391180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.095 [2024-04-19 04:15:53.391449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.095 [2024-04-19 04:15:53.391712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.095 [2024-04-19 04:15:53.391724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.391733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.395957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.405152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.405555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.405828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.405843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.405852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.406114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.406383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.406395] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.406404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.410617] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.419819] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.420306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.420433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.420448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.420458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.420718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.420985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.420996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.421005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.425219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.434424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.434843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.435147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.435177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.435198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.435564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.435828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.435839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.435848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.440049] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.449002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.449472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.449687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.449702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.449711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.449972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.450235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.450246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.450255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.454473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.463665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.464220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.464405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.464437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.464458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.465032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.465402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.465426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.465435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.469644] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.478334] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.478826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.479020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.479034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.479043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.479304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.479575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.479587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.479596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.483803] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.492996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.493484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.493662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.493676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.493686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.493947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.494209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.494221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.494230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.498457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.507649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.508135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.508323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.508338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.508353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.508615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.508877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.508888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.508902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.513103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.522297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.096 [2024-04-19 04:15:53.522847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.523097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.096 [2024-04-19 04:15:53.523112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.096 [2024-04-19 04:15:53.523121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.096 [2024-04-19 04:15:53.523391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.096 [2024-04-19 04:15:53.523653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.096 [2024-04-19 04:15:53.523665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.096 [2024-04-19 04:15:53.523674] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.096 [2024-04-19 04:15:53.527910] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.096 [2024-04-19 04:15:53.536877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.537392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.537623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.537653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.537673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.538247] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.538643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.538656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.538665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.542880] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.097 [2024-04-19 04:15:53.551584] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.552113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.552428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.552461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.552491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.552754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.553115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.553131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.553145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.559339] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.097 [2024-04-19 04:15:53.566745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.567283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.567527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.567559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.567579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.568153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.568483] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.568495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.568504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.572714] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.097 [2024-04-19 04:15:53.581417] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.581962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.582222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.582236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.582246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.582514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.582775] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.582786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.582796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.587006] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.097 [2024-04-19 04:15:53.595961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.596504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.596755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.596770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.596779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.597040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.597302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.597313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.597322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.601785] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.097 [2024-04-19 04:15:53.610506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.097 [2024-04-19 04:15:53.610934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.611238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.097 [2024-04-19 04:15:53.611252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.097 [2024-04-19 04:15:53.611262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.097 [2024-04-19 04:15:53.611533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.097 [2024-04-19 04:15:53.611796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.097 [2024-04-19 04:15:53.611808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.097 [2024-04-19 04:15:53.611817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.097 [2024-04-19 04:15:53.616023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.625229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.625705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.625849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.625863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.625873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.626134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.626402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.626414] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.626423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.630633] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.639956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.640508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.640706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.640720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.640730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.640993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.641255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.641266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.641276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.645502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.654708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.655255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.655501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.655533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.655554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.656126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.656712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.656737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.656757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.661039] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.669249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.669813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.670085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.670115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.670136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.670723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.671086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.671098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.671107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.675307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.683761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.684302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.684553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.684569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.684578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.684840] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.685102] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.685113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.685122] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.689326] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.698261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.698848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.699231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.699268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.699290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.699861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.700124] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.700136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.700146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.704357] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.712814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.713368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.713626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.713655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.391 [2024-04-19 04:15:53.713676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.391 [2024-04-19 04:15:53.714073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.391 [2024-04-19 04:15:53.714335] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.391 [2024-04-19 04:15:53.714355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.391 [2024-04-19 04:15:53.714365] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.391 [2024-04-19 04:15:53.718573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.391 [2024-04-19 04:15:53.727520] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.391 [2024-04-19 04:15:53.728020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.391 [2024-04-19 04:15:53.728294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.728308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.728318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.728588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.728850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.728861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.728871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.733078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.742061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.742545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.742745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.742759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.742776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.743038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.743301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.743312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.743321] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.747546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.756745] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.757302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.757639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.757670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.757691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.758050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.758312] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.758324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.758333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.762544] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.771488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.772072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.772322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.772363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.772385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.772947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.773208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.773220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.773229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.777432] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.786110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.786628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.786853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.786868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.786877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.787146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.787416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.787428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.787437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.791639] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.800833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.801371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.801696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.801726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.801747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.802237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.802503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.802515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.802524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.806737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.815429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.815971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.816247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.816277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.816298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.816887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.817367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.817379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.817389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.821596] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.830027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.830571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.830837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.830851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.830861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.831123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.831396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.831409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.831418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.835622] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.844567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.845060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.845369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.845400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.392 [2024-04-19 04:15:53.845422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.392 [2024-04-19 04:15:53.845995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.392 [2024-04-19 04:15:53.846418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.392 [2024-04-19 04:15:53.846430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.392 [2024-04-19 04:15:53.846439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.392 [2024-04-19 04:15:53.850646] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.392 [2024-04-19 04:15:53.859075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.392 [2024-04-19 04:15:53.859623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.859801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.392 [2024-04-19 04:15:53.859816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.393 [2024-04-19 04:15:53.859826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.393 [2024-04-19 04:15:53.860088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.393 [2024-04-19 04:15:53.860357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.393 [2024-04-19 04:15:53.860370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.393 [2024-04-19 04:15:53.860378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.393 [2024-04-19 04:15:53.864575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.393 [2024-04-19 04:15:53.873759] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.393 [2024-04-19 04:15:53.874308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.393 [2024-04-19 04:15:53.874561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.393 [2024-04-19 04:15:53.874576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.393 [2024-04-19 04:15:53.874586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.393 [2024-04-19 04:15:53.874848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.393 [2024-04-19 04:15:53.875110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.393 [2024-04-19 04:15:53.875125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.393 [2024-04-19 04:15:53.875134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.393 [2024-04-19 04:15:53.879338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.393 [2024-04-19 04:15:53.888278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.393 [2024-04-19 04:15:53.888838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.393 [2024-04-19 04:15:53.889148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.393 [2024-04-19 04:15:53.889178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.393 [2024-04-19 04:15:53.889199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.393 [2024-04-19 04:15:53.889562] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.393 [2024-04-19 04:15:53.889824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.393 [2024-04-19 04:15:53.889836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.393 [2024-04-19 04:15:53.889845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.393 [2024-04-19 04:15:53.894042] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.653 [2024-04-19 04:15:53.902992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.653 [2024-04-19 04:15:53.903538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.903813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.903827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.653 [2024-04-19 04:15:53.903837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.653 [2024-04-19 04:15:53.904099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.653 [2024-04-19 04:15:53.904368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.653 [2024-04-19 04:15:53.904380] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.653 [2024-04-19 04:15:53.904390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.653 [2024-04-19 04:15:53.908595] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.653 [2024-04-19 04:15:53.917533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.653 [2024-04-19 04:15:53.918091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.918407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.918439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.653 [2024-04-19 04:15:53.918460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.653 [2024-04-19 04:15:53.919034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.653 [2024-04-19 04:15:53.919358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.653 [2024-04-19 04:15:53.919370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.653 [2024-04-19 04:15:53.919384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.653 [2024-04-19 04:15:53.923589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.653 [2024-04-19 04:15:53.932030] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.653 [2024-04-19 04:15:53.932571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.932892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.932923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.653 [2024-04-19 04:15:53.932943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.653 [2024-04-19 04:15:53.933545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.653 [2024-04-19 04:15:53.933809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.653 [2024-04-19 04:15:53.933822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.653 [2024-04-19 04:15:53.933831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.653 [2024-04-19 04:15:53.938042] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.653 [2024-04-19 04:15:53.946758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.653 [2024-04-19 04:15:53.947309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.947630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.653 [2024-04-19 04:15:53.947662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.653 [2024-04-19 04:15:53.947682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.653 [2024-04-19 04:15:53.948144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.653 [2024-04-19 04:15:53.948538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.653 [2024-04-19 04:15:53.948556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.653 [2024-04-19 04:15:53.948569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.653 [2024-04-19 04:15:53.954737] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.653 [2024-04-19 04:15:53.961695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:53.962250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.962564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.962597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:53.962617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:53.963192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:53.963529] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:53.963541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:53.963550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:53.967755] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.654 [2024-04-19 04:15:53.976182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:53.976724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.977001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.977016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:53.977026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:53.977287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:53.977558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:53.977570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:53.977579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:53.981781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.654 [2024-04-19 04:15:53.990717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:53.991268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.991445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:53.991478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:53.991499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:53.991849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:53.992111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:53.992122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:53.992131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:53.996338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.654 [2024-04-19 04:15:54.005276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.005843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.006153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.006184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.006204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.006639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.006901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.006913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.006922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.011121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.654 [2024-04-19 04:15:54.019803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.020370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.020683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.020713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.020734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.021309] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.021855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.021867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.021876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.026076] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.654 [2024-04-19 04:15:54.034515] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.035030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.035296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.035311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.035321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.035592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.035855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.035866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.035875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.040080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
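[Editor's note] The disconnect / reconnect_poll_async pair in these entries is SPDK's asynchronous controller reset. A hedged sketch of that flow using the public API functions the log names (signatures assumed from spdk/nvme.h of roughly this SPDK version; this is illustrative, not bdev_nvme's actual reset path):

```c
/* Sketch of one reset pass, assuming the spdk/nvme.h public API. */
#include <errno.h>
#include "spdk/nvme.h"

/* Returns 0 if the controller came back, negative errno otherwise. */
static int
reset_ctrlr_once(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc;

    rc = spdk_nvme_ctrlr_disconnect(ctrlr);  /* "resetting controller" */
    if (rc != 0) {
        return rc;
    }

    spdk_nvme_ctrlr_reconnect_async(ctrlr);  /* re-create admin qpair, connect() */

    /* Poll until reinitialization finishes. With the target refusing
     * connections this ends in the "controller reinitialization failed"
     * / "in failed state" path seen above. */
    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
    } while (rc == -EAGAIN);

    return rc;
}
```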
00:24:39.654 [2024-04-19 04:15:54.049025] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.049565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.049836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.049851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.049860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.050121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.050391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.050403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.050413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.054609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.654 [2024-04-19 04:15:54.063593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.064161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.064475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.064508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.064533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.064795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.065057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.065068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.065077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.069283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.654 [2024-04-19 04:15:54.078217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.078681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.078928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.078943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.078953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.654 [2024-04-19 04:15:54.079214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.654 [2024-04-19 04:15:54.079485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.654 [2024-04-19 04:15:54.079497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.654 [2024-04-19 04:15:54.079506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.654 [2024-04-19 04:15:54.083713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.654 [2024-04-19 04:15:54.092898] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.654 [2024-04-19 04:15:54.093374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.093650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.654 [2024-04-19 04:15:54.093664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.654 [2024-04-19 04:15:54.093674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.093936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.094198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.094210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.094219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.098425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.655 [2024-04-19 04:15:54.107611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.655 [2024-04-19 04:15:54.108155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.108352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.108368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.655 [2024-04-19 04:15:54.108382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.108643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.108906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.108918] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.108927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.113135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.655 [2024-04-19 04:15:54.122320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.655 [2024-04-19 04:15:54.122869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.123136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.123150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.655 [2024-04-19 04:15:54.123160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.123430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.123693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.123705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.123714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.127920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.655 [2024-04-19 04:15:54.136853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.655 [2024-04-19 04:15:54.137408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.137641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.137672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.655 [2024-04-19 04:15:54.137694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.138069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.138331] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.138348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.138358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.142559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.655 [2024-04-19 04:15:54.151521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.655 [2024-04-19 04:15:54.152040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.152317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.152332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.655 [2024-04-19 04:15:54.152341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.152616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.152880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.152891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.152900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.157100] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.655 [2024-04-19 04:15:54.166031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.655 [2024-04-19 04:15:54.166582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.166886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.655 [2024-04-19 04:15:54.166915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.655 [2024-04-19 04:15:54.166936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.655 [2024-04-19 04:15:54.167275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.655 [2024-04-19 04:15:54.167546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.655 [2024-04-19 04:15:54.167559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.655 [2024-04-19 04:15:54.167568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.655 [2024-04-19 04:15:54.171770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.914 [2024-04-19 04:15:54.180696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.181167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.181475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.181507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.181528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.182101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.182449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.182461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.182470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.186677] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.914 [2024-04-19 04:15:54.195359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.195906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.196220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.196251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.196270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.196866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.197238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.197249] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.197258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.201473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.914 [2024-04-19 04:15:54.209900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.210448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.210761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.210803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.210818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.211201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.211597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.211614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.211628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.217805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
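[Editor's note] Each refused connect is followed by "Failed to flush tqpair=... (9): Bad file descriptor": by the time the qpair is flushed, the socket from the failed connect has already been torn down, and operations on a closed descriptor fail with errno 9 (EBADF). A self-contained illustration of that errno (not the nvme_tcp.c flush path itself):

```c
/* Sketch: I/O on a closed socket fd yields errno 9 (EBADF). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* qpair teardown closes the socket */

    if (send(fd, "x", 1, 0) < 0) {  /* a later flush still references fd */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
        /* prints: flush failed (9): Bad file descriptor */
    }
    return 0;
}
```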
00:24:39.914 [2024-04-19 04:15:54.225047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.225602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.225890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.225921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.225941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.226532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.227096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.227107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.227116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.231319] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.914 [2024-04-19 04:15:54.239755] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.240319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.240709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.240740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.240761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.241147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.241415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.241431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.241440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.245641] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.914 [2024-04-19 04:15:54.254330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.254872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.255144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.255174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.255196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.255783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.256126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.256137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.256146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.914 [2024-04-19 04:15:54.260355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.914 [2024-04-19 04:15:54.269034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.914 [2024-04-19 04:15:54.269572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.269849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.914 [2024-04-19 04:15:54.269879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.914 [2024-04-19 04:15:54.269899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.914 [2024-04-19 04:15:54.270403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.914 [2024-04-19 04:15:54.270666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.914 [2024-04-19 04:15:54.270677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.914 [2024-04-19 04:15:54.270687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.274899] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.915 [2024-04-19 04:15:54.283577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.284127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.284365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.284397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.284418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.284993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.285390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.285402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.285415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.289616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.915 [2024-04-19 04:15:54.298299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.298819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.299069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.299098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.299119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.299719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.300298] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.300309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.300319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.304525] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.915 [2024-04-19 04:15:54.312958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.313480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.313779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.313809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.313830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.314416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.314701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.314712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.314721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.318927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.915 [2024-04-19 04:15:54.327607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.328153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.328321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.328363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.328386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.328759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.329021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.329033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.329042] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.333253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.915 [2024-04-19 04:15:54.342188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.342730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.343002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.343016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.343026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.343287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.343558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.343570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.343579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.347781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.915 [2024-04-19 04:15:54.356722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.357197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.357423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.357441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.357451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.357714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.357978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.357989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.357998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.362217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.915 [2024-04-19 04:15:54.371429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.371983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.372230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.372260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.372282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.372742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.373005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.373017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.373027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.377241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.915 [2024-04-19 04:15:54.385940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.386468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.386742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.386756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.386766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.387028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.387290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.387302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.387312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.391526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.915 [2024-04-19 04:15:54.400488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.915 [2024-04-19 04:15:54.401082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.401356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.915 [2024-04-19 04:15:54.401372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.915 [2024-04-19 04:15:54.401382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.915 [2024-04-19 04:15:54.401643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.915 [2024-04-19 04:15:54.401906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.915 [2024-04-19 04:15:54.401917] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.915 [2024-04-19 04:15:54.401926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.915 [2024-04-19 04:15:54.406138] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.915 [2024-04-19 04:15:54.415069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.916 [2024-04-19 04:15:54.415564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.916 [2024-04-19 04:15:54.415837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.916 [2024-04-19 04:15:54.415851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420 00:24:39.916 [2024-04-19 04:15:54.415861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set 00:24:39.916 [2024-04-19 04:15:54.416122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor 00:24:39.916 [2024-04-19 04:15:54.416392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.916 [2024-04-19 04:15:54.416404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.916 [2024-04-19 04:15:54.416413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.916 [2024-04-19 04:15:54.420624] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.916 [2024-04-19 04:15:54.429816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.916 [2024-04-19 04:15:54.430383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.916 [2024-04-19 04:15:54.430642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.916 [2024-04-19 04:15:54.430672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80ecc0 with addr=10.0.0.2, port=4420
00:24:39.916 [2024-04-19 04:15:54.430693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ecc0 is same with the state(5) to be set
00:24:39.916 [2024-04-19 04:15:54.431267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80ecc0 (9): Bad file descriptor
00:24:39.916 [2024-04-19 04:15:54.431580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.916 [2024-04-19 04:15:54.431593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.916 [2024-04-19 04:15:54.431602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.916 [2024-04-19 04:15:54.435806] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.175 [... the same resetting-controller / connect() errno 111 / reinitialization-failed / reset-failed cycle against tqpair=0x80ecc0 (10.0.0.2:4420) repeats seven more times between 04:15:54.444 and 04:15:54.538; elided as duplicates ...]
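Every connect() failure in these cycles reports errno = 111, which on Linux is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 because the test has just killed the nvmf target (the 'Killed' message below), and bdev_nvme keeps polling for a reconnect in the meantime. A quick sanity check of the errno name, as a sketch (assumes python3 on the test node):

    $ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    ECONNREFUSED Connection refused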
00:24:40.176 [... one reconnect/reset-failed cycle (04:15:54.546 to 54.552), identical to the one kept above, elided ...]
00:24:40.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3959077 Killed "${NVMF_APP[@]}" "$@"
00:24:40.176 04:15:54 -- host/bdevperf.sh@36 -- # tgt_init
00:24:40.176 04:15:54 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:40.176 04:15:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:24:40.176 04:15:54 -- common/autotest_common.sh@710 -- # xtrace_disable
00:24:40.176 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.176 [... one reconnect cycle (04:15:54.561 to 54.567) elided ...]
00:24:40.176 04:15:54 -- nvmf/common.sh@470 -- # nvmfpid=3960436
00:24:40.176 04:15:54 -- nvmf/common.sh@471 -- # waitforlisten 3960436
00:24:40.176 04:15:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:40.176 04:15:54 -- common/autotest_common.sh@817 -- # '[' -z 3960436 ']'
00:24:40.176 04:15:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:40.176 04:15:54 -- common/autotest_common.sh@822 -- # local max_retries=100
00:24:40.176 04:15:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:40.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:40.176 04:15:54 -- common/autotest_common.sh@826 -- # xtrace_disable
00:24:40.176 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.176 [... one reconnect cycle (04:15:54.576 to 54.581) elided ...]
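The xtrace above shows how the target comes back: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten polls the RPC socket. A minimal hand-run equivalent, assuming the same namespace and the default /var/tmp/spdk.sock socket (the readiness loop is a stand-in for waitforlisten, using the real spdk_get_version RPC):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll until the app answers on the RPC socket, as waitforlisten does
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done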
00:24:40.176 [... two reconnect cycles (04:15:54.590 to 54.610) elided ...]
00:24:40.176 [2024-04-19 04:15:54.614322] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:24:40.176 [2024-04-19 04:15:54.614385] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:40.176 [... two reconnect cycles (04:15:54.620 to 54.640) elided ...]
00:24:40.177 [... one reconnect cycle (04:15:54.649 to 54.654) elided ...]
00:24:40.177 EAL: No free 2048 kB hugepages reported on node 1
00:24:40.177 [... one reconnect cycle (04:15:54.663 to 54.669) elided ...]
00:24:40.177 [... one reconnect cycle (04:15:54.678 to 54.684) elided ...]
00:24:40.177 [2024-04-19 04:15:54.693512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:40.177 [... one reconnect cycle (04:15:54.693 to 54.698) elided ...]
00:24:40.436 [... four reconnect/reset-failed cycles (04:15:54.707 to 54.757), identical to the one kept earlier, elided ...]
00:24:40.436 [... two reconnect cycles (04:15:54.766 to 54.786) elided ...]
00:24:40.436 [2024-04-19 04:15:54.784136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:40.436 [2024-04-19 04:15:54.784168] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:40.436 [2024-04-19 04:15:54.784178] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:40.436 [2024-04-19 04:15:54.784187] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:40.436 [2024-04-19 04:15:54.784195] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:40.437 [2024-04-19 04:15:54.784239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:40.437 [2024-04-19 04:15:54.784678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:24:40.437 [2024-04-19 04:15:54.784681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:40.437 [... two reconnect cycles (04:15:54.795 to 54.815) elided ...]
00:24:40.437 [... four reconnect/reset-failed cycles (04:15:54.824 to 54.874), identical to the one kept earlier, elided ...]
00:24:40.437 [... one reconnect cycle (04:15:54.883 to 54.889) elided ...]
00:24:40.437 04:15:54 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:24:40.437 04:15:54 -- common/autotest_common.sh@850 -- # return 0
00:24:40.437 04:15:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:24:40.437 04:15:54 -- common/autotest_common.sh@716 -- # xtrace_disable
00:24:40.437 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.437 [... two reconnect cycles (04:15:54.898 to 54.918) elided ...]
00:24:40.438 04:15:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:40.438 04:15:54 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:40.438 04:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:40.438 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.438 [2024-04-19 04:15:54.928332] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:40.438 [... one reconnect cycle (04:15:54.927 to 54.933) elided ...]
00:24:40.438 04:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:40.438 04:15:54 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:40.438 04:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:40.438 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.438 [... two reconnect cycles (04:15:54.942 to 54.962) elided ...]
00:24:40.695 [... one reconnect cycle (04:15:54.971 to 54.977) elided ...]
00:24:40.695 Malloc0
00:24:40.695 04:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:40.695 04:15:54 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:40.695 04:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:40.695 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.695 04:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:40.695 04:15:54 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:40.695 04:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:40.695 04:15:54 -- common/autotest_common.sh@10 -- # set +x
00:24:40.695 [... one reconnect cycle (04:15:54.986 to 54.991) elided ...]
00:24:40.695 04:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.695 [2024-04-19 04:15:54.991971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.695 04:15:54 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.695 04:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.695 04:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:40.695 [2024-04-19 04:15:54.995460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.695 04:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.695 04:15:54 -- host/bdevperf.sh@38 -- # wait 3959377 00:24:40.695 [2024-04-19 04:15:55.000926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.695 [2024-04-19 04:15:55.160587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:50.652 00:24:50.652 Latency(us) 00:24:50.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.652 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:50.652 Verification LBA range: start 0x0 length 0x4000 00:24:50.652 Nvme1n1 : 15.00 5500.10 21.48 7210.86 0.00 10039.18 629.29 18707.55 00:24:50.652 =================================================================================================================== 00:24:50.652 Total : 5500.10 21.48 7210.86 0.00 10039.18 629.29 18707.55 00:24:50.652 04:16:04 -- host/bdevperf.sh@39 -- # sync 00:24:50.652 04:16:04 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.652 04:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.652 04:16:04 -- common/autotest_common.sh@10 -- # set +x 00:24:50.652 04:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.652 04:16:04 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:50.652 04:16:04 -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:50.652 04:16:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:50.652 04:16:04 -- nvmf/common.sh@117 -- # sync 00:24:50.652 04:16:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.652 04:16:04 -- nvmf/common.sh@120 -- # set +e 00:24:50.652 04:16:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.652 04:16:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.652 rmmod nvme_tcp 00:24:50.652 rmmod nvme_fabrics 00:24:50.652 rmmod nvme_keyring 00:24:50.652 04:16:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.652 04:16:04 -- nvmf/common.sh@124 -- # set -e 00:24:50.652 04:16:04 -- nvmf/common.sh@125 -- # return 0 00:24:50.652 04:16:04 -- nvmf/common.sh@478 -- # '[' -n 3960436 ']' 00:24:50.652 04:16:04 -- nvmf/common.sh@479 -- # killprocess 3960436 00:24:50.652 04:16:04 -- common/autotest_common.sh@936 -- # '[' -z 3960436 ']' 00:24:50.652 04:16:04 -- common/autotest_common.sh@940 -- # kill -0 3960436 00:24:50.652 04:16:04 -- common/autotest_common.sh@941 -- # uname 00:24:50.652 04:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:50.652 04:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3960436 00:24:50.652 04:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:50.652 04:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:50.652 04:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 3960436' 00:24:50.652 killing process with pid 3960436 00:24:50.652 04:16:04 -- common/autotest_common.sh@955 -- # kill 3960436 00:24:50.652 04:16:04 -- common/autotest_common.sh@960 -- # wait 3960436 00:24:50.652 04:16:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:50.652 04:16:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:50.652 04:16:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:50.652 04:16:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.652 04:16:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.652 04:16:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.652 04:16:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.652 04:16:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.552 04:16:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.552 00:24:52.552 real 0m26.128s 00:24:52.552 user 1m2.048s 00:24:52.552 sys 0m6.261s 00:24:52.552 04:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:52.552 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.552 ************************************ 00:24:52.552 END TEST nvmf_bdevperf 00:24:52.552 ************************************ 00:24:52.552 04:16:06 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:52.552 04:16:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:52.552 04:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.552 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:24:52.552 ************************************ 00:24:52.552 START TEST nvmf_target_disconnect 00:24:52.552 ************************************ 00:24:52.552 04:16:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:52.552 * Looking for test storage... 
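nvmftestfini has just unwound the bdevperf setup before the next suite starts. Condensed into a sketch ($nvmfpid stands for the target pid, 3960436 in this run):

  # Sketch of the teardown steps logged above (nvmf/common.sh@122 through @279).
  modprobe -v -r nvme-tcp              # removes nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the target reactors
  ip -4 addr flush cvl_0_1             # drop the initiator-side test address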
00:24:52.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.552 04:16:06 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.552 04:16:06 -- nvmf/common.sh@7 -- # uname -s 00:24:52.552 04:16:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.552 04:16:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.552 04:16:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.552 04:16:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.552 04:16:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.552 04:16:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.552 04:16:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.552 04:16:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.552 04:16:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.552 04:16:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.552 04:16:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:52.552 04:16:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:52.552 04:16:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.552 04:16:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.552 04:16:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.552 04:16:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.552 04:16:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.552 04:16:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.552 04:16:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.552 04:16:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.552 04:16:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.552 04:16:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.552 04:16:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.552 04:16:06 -- paths/export.sh@5 -- # export PATH 00:24:52.552 04:16:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.552 04:16:06 -- nvmf/common.sh@47 -- # : 0 00:24:52.552 04:16:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.552 04:16:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.552 04:16:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.552 04:16:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.552 04:16:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.552 04:16:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.552 04:16:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.552 04:16:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.552 04:16:06 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:52.552 04:16:06 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:52.552 04:16:06 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:52.552 04:16:06 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:24:52.552 04:16:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:52.552 04:16:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.552 04:16:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:52.552 04:16:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:52.552 04:16:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:52.552 04:16:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.552 04:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.552 04:16:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.552 04:16:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:52.552 04:16:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:52.552 04:16:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.552 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 04:16:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:59.115 04:16:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.115 04:16:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.115 04:16:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.115 04:16:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.115 04:16:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.115 04:16:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.115 
04:16:12 -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.115 04:16:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.115 04:16:12 -- nvmf/common.sh@296 -- # e810=() 00:24:59.115 04:16:12 -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.115 04:16:12 -- nvmf/common.sh@297 -- # x722=() 00:24:59.115 04:16:12 -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.115 04:16:12 -- nvmf/common.sh@298 -- # mlx=() 00:24:59.115 04:16:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.115 04:16:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.115 04:16:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.115 04:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:59.115 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:59.115 04:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.115 04:16:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:59.115 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:59.115 04:16:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.115 04:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.115 04:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.115 04:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:59.115 Found net devices under 0000:af:00.0: cvl_0_0 00:24:59.115 04:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.115 04:16:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.115 04:16:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.115 04:16:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:59.115 Found net devices under 0000:af:00.1: cvl_0_1 00:24:59.115 04:16:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:59.115 04:16:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:59.115 04:16:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.115 04:16:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.115 04:16:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.115 04:16:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.115 04:16:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.115 04:16:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.115 04:16:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.115 04:16:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.115 04:16:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.115 04:16:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.115 04:16:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.115 04:16:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.115 04:16:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.115 04:16:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.115 04:16:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.115 04:16:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.115 04:16:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.115 04:16:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.115 04:16:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:24:59.115 00:24:59.115 --- 10.0.0.2 ping statistics --- 00:24:59.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.115 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:59.115 04:16:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:59.115 00:24:59.115 --- 10.0.0.1 ping statistics --- 00:24:59.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.115 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:59.115 04:16:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.115 04:16:12 -- nvmf/common.sh@411 -- # return 0 00:24:59.115 04:16:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:59.115 04:16:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.115 04:16:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:59.115 04:16:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.115 04:16:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:59.115 04:16:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:59.115 04:16:12 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:59.115 04:16:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:59.115 04:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.115 04:16:12 -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 ************************************ 00:24:59.115 START TEST nvmf_target_disconnect_tc1 00:24:59.115 ************************************ 00:24:59.115 04:16:12 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:24:59.115 04:16:12 -- host/target_disconnect.sh@32 -- # set +e 00:24:59.115 04:16:12 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.115 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.115 [2024-04-19 04:16:13.002322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.115 [2024-04-19 04:16:13.002639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.115 [2024-04-19 04:16:13.002661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b2ed0 with addr=10.0.0.2, port=4420 00:24:59.115 [2024-04-19 04:16:13.002699] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:59.115 [2024-04-19 04:16:13.002720] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:59.115 [2024-04-19 04:16:13.002731] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:59.115 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:59.115 Initializing NVMe Controllers 00:24:59.115 04:16:13 -- host/target_disconnect.sh@33 -- # trap - ERR 00:24:59.115 04:16:13 -- host/target_disconnect.sh@33 -- # print_backtrace 00:24:59.115 04:16:13 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:24:59.115 04:16:13 -- common/autotest_common.sh@1139 -- # return 0 00:24:59.115 04:16:13 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:24:59.115 04:16:13 -- host/target_disconnect.sh@41 -- # set -e 00:24:59.115 00:24:59.115 real 0m0.121s 00:24:59.115 user 0m0.047s 00:24:59.115 sys 0m0.074s 00:24:59.115 04:16:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:59.115 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 ************************************ 00:24:59.115 
END TEST nvmf_target_disconnect_tc1 00:24:59.115 ************************************ 00:24:59.115 04:16:13 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:59.115 04:16:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:59.115 04:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:59.115 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.115 ************************************ 00:24:59.115 START TEST nvmf_target_disconnect_tc2 00:24:59.115 ************************************ 00:24:59.116 04:16:13 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:24:59.116 04:16:13 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:24:59.116 04:16:13 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:59.116 04:16:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:59.116 04:16:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 04:16:13 -- nvmf/common.sh@470 -- # nvmfpid=3965922 00:24:59.116 04:16:13 -- nvmf/common.sh@471 -- # waitforlisten 3965922 00:24:59.116 04:16:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:59.116 04:16:13 -- common/autotest_common.sh@817 -- # '[' -z 3965922 ']' 00:24:59.116 04:16:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.116 04:16:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:59.116 04:16:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.116 04:16:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 [2024-04-19 04:16:13.242772] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:24:59.116 [2024-04-19 04:16:13.242823] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.116 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.116 [2024-04-19 04:16:13.327493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.116 [2024-04-19 04:16:13.416950] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.116 [2024-04-19 04:16:13.416994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.116 [2024-04-19 04:16:13.417004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.116 [2024-04-19 04:16:13.417013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.116 [2024-04-19 04:16:13.417020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
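disconnect_init repeats the standard bring-up: the target is launched inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init wired up above (cvl_0_0 at 10.0.0.2 inside the namespace, cvl_0_1 at 10.0.0.1 left in the root namespace, port 4420 opened in iptables), and waitforlisten blocks until the RPC socket answers. A condensed sketch of both pieces; the until-loop is a simplified stand-in for waitforlisten, which really retries an RPC:

  # Namespace wiring, as run by nvmf_tcp_init earlier in this log:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Target launch (nvmf/common.sh@469 above), then wait for the RPC socket:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done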
00:24:59.116 [2024-04-19 04:16:13.417131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:59.116 [2024-04-19 04:16:13.417216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:59.116 [2024-04-19 04:16:13.417331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:59.116 [2024-04-19 04:16:13.417331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:59.116 04:16:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.116 04:16:13 -- common/autotest_common.sh@850 -- # return 0 00:24:59.116 04:16:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:59.116 04:16:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 04:16:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.116 04:16:13 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 Malloc0 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 [2024-04-19 04:16:13.589711] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 [2024-04-19 04:16:13.621947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:59.116 04:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.116 04:16:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.116 04:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.116 04:16:13 -- host/target_disconnect.sh@50 -- # reconnectpid=3966055 00:24:59.116 04:16:13 -- host/target_disconnect.sh@52 -- # sleep 2 00:24:59.116 04:16:13 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:59.373 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.276 04:16:15 -- host/target_disconnect.sh@53 -- # kill -9 3965922 00:25:01.276 04:16:15 -- host/target_disconnect.sh@55 -- # sleep 2 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Write completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.276 Read completed with error (sct=0, sc=8) 00:25:01.276 starting I/O failed 00:25:01.277 [2024-04-19 04:16:15.650686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 3 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 [2024-04-19 04:16:15.650868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error 
(sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 [2024-04-19 04:16:15.651151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, 
sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Read completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 Write completed with error (sct=0, sc=8) 00:25:01.277 starting I/O failed 00:25:01.277 [2024-04-19 04:16:15.651434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:01.277 [2024-04-19 04:16:15.651757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.652058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.652090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.277 qpair failed and we were unable to recover it. 00:25:01.277 [2024-04-19 04:16:15.652364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.652583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.652613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.277 qpair failed and we were unable to recover it. 00:25:01.277 [2024-04-19 04:16:15.652906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.653203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.653238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.277 qpair failed and we were unable to recover it. 00:25:01.277 [2024-04-19 04:16:15.653516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.653746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.653777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.277 qpair failed and we were unable to recover it. 
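The failures from here on are the point of tc2: errno = 111 is ECONNREFUSED. The harness killed the target (kill -9 3965922 above) while the reconnect example still had I/O queued, so the aborted commands complete with errors and every reconnect attempt is refused. The triggering sequence, condensed from the target_disconnect.sh@48-53 steps logged above:

  # Sketch: start host I/O, then yank the target away mid-run.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # target gone: connect() now returns ECONNREFUSED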
00:25:01.277 [2024-04-19 04:16:15.654086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.654293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.277 [2024-04-19 04:16:15.654323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.654621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.654889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.654919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.655158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.655371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.655402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.655677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.655948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.655977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.656281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.656528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.656559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.656716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.656889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.656918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.657213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.657527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.657556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 
00:25:01.278 [2024-04-19 04:16:15.657877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.658123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.658152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.658377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.658640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.658671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.658888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.659092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.659122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.659427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.659580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.659611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.659827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.660071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.660101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.660345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.660568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.660581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.660786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.661045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.661076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 
00:25:01.278 [2024-04-19 04:16:15.661305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.661531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.661541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.661793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.662264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.662695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.662957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.663170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.663284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.663295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.663527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.663815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.663845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.664052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.664325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.664363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 
00:25:01.278 [2024-04-19 04:16:15.664582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.664811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.664842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.665143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.665362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.665394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.665671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.665957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.665987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.666233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.666474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.666505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.666807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.667049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.667078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.667379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.667650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.667681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 00:25:01.278 [2024-04-19 04:16:15.667921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.668193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.278 [2024-04-19 04:16:15.668223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.278 qpair failed and we were unable to recover it. 
00:25:01.278 [2024-04-19 04:16:15.668537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.278 [2024-04-19 04:16:15.668760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.278 [2024-04-19 04:16:15.668789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.278 qpair failed and we were unable to recover it.
00:25:01.278 [... the same three-message failure pattern — two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every reconnection attempt from 04:16:15.668 through 04:16:15.736; the intervening repetitions are elided ...]
00:25:01.285 [2024-04-19 04:16:15.736459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.285 [2024-04-19 04:16:15.736750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.285 [2024-04-19 04:16:15.736779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.285 qpair failed and we were unable to recover it.
00:25:01.285 [2024-04-19 04:16:15.737109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.737305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.737375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.737589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.737784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.737794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.737978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.738208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.738239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.738456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.738660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.738669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.739441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.739693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.739704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.739884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.739993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.740002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.740259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.740362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.740388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-04-19 04:16:15.740549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.740782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.740792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.740893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.741175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.741537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.741791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.741966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.742175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.742569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.742743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-04-19 04:16:15.742857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.743333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.743561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.743804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.743926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.744266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.744567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.744855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.744989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.745018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-04-19 04:16:15.745245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.745379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.745411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.745555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.745739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.745750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-04-19 04:16:15.745864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-04-19 04:16:15.746040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.746050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.746309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.746320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.746496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.746677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.746687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.746846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.747234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-04-19 04:16:15.747584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.747758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.747921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.748356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.748798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.748935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.749166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.749284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.749315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.749588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.749744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.749766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.749956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-04-19 04:16:15.750368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.750688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.750862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.750992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.751226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.751265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.751541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.751674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.751704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.751940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.752135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.752165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.752409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.752612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.752642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-04-19 04:16:15.752845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.753127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-04-19 04:16:15.753156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-04-19 04:16:15.753369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.753589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.753619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.753799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.754298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.754716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.754913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.755125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.755305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.755335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.755643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.756232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 
00:25:01.287 [2024-04-19 04:16:15.756666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.756898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.757108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.757320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.757363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.757664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.757795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.757824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.757970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.758208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.758237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.758463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.758701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.758731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.758935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.759089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.759119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.759327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.759475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.759506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 
00:25:01.287 [2024-04-19 04:16:15.759782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.759998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.760027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.760190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.760339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.760379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.760592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.760813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.760842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.760957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.761221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.761251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.761389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.761685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.761714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.761850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.762132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.762163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.762483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.762697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.762727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 
00:25:01.287 [2024-04-19 04:16:15.762928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.763298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.763737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.763942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.764165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.764308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.764337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.764563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.764781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.764811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.765033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.765329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.765367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.287 qpair failed and we were unable to recover it. 00:25:01.287 [2024-04-19 04:16:15.765646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.765873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.287 [2024-04-19 04:16:15.765902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 
00:25:01.288 [2024-04-19 04:16:15.766055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.766296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.766326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.766515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.766626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.766655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.766929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.767086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.767115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.767385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.767690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.767720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.767935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.768094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.768123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.768401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.768599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.768610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.768779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.769011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.769041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 
00:25:01.288 [2024-04-19 04:16:15.769368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.769498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.769527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.769767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.769971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.770000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.770271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.770421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.770450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.770755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.771248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.771700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.771945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.772165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.772410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.772441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 
00:25:01.288 [2024-04-19 04:16:15.772679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.772908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.772937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.773238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.773445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.773476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.773708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.773941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.773952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.774080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.774456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.774760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.774933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.775146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.775286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.775316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 
00:25:01.288 [2024-04-19 04:16:15.775581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.775788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.775828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.776130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.776370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.776403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.776705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.776907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.776937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83b8000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.777138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.777307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.777337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.777552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.777821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.777851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.778124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.778358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.778389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 00:25:01.288 [2024-04-19 04:16:15.778705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.778906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.288 [2024-04-19 04:16:15.778936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.288 qpair failed and we were unable to recover it. 
00:25:01.289 [2024-04-19 04:16:15.779253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-04-19 04:16:15.779614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-04-19 04:16:15.779655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (two connect() failures with errno = 111, then a sock connection error and an unrecoverable qpair) repeats twice more for tqpair=0xd1ef90, through 2024-04-19 04:16:15.780588 ...]
00:25:01.289 [2024-04-19 04:16:15.780870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-04-19 04:16:15.781196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-04-19 04:16:15.781226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
[... the identical failure sequence then repeats continuously for tqpair=0x7f83c0000b90, always against addr=10.0.0.2, port=4420 and always with errno = 111, from 2024-04-19 04:16:15.781437 through 04:16:15.847691 (Jenkins elapsed time 00:25:01.289 to 00:25:01.561), each iteration ending with "qpair failed and we were unable to recover it." ...]
00:25:01.561 [2024-04-19 04:16:15.847910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.848120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.848150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.848724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.848754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.848983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.849147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.849158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.849321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.849611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.849642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.849938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.850141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.850171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.850503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.850717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.850747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.850949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.851155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.851185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 
00:25:01.561 [2024-04-19 04:16:15.851483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.851706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.851736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.852049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.852290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.852320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.852539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.852740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.852770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.852985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.853169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.853178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.561 qpair failed and we were unable to recover it. 00:25:01.561 [2024-04-19 04:16:15.853341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.853632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.561 [2024-04-19 04:16:15.853662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.853956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.854140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.854170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.854385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.854526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.854556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 
00:25:01.562 [2024-04-19 04:16:15.854835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.855054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.855085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.855389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.855604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.855634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.855937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.856168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.856199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.856407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.856682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.856712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.856956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.857097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.857126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.857398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.857597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.857627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.857833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.857984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.858014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 
00:25:01.562 [2024-04-19 04:16:15.858219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.858369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.858400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.858612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.858810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.858839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.858978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.859360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.859729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.859862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.860131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.860328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.860366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.860502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.860715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.860746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 
00:25:01.562 [2024-04-19 04:16:15.860953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.861440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.861807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.861974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.862144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.862266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.862296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.862524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.862656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.862686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.862840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.863155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.863184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.863419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.863655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.863685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 
00:25:01.562 [2024-04-19 04:16:15.863853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.864072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.864101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.864429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.864645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.864674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.864892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.865097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.865127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.562 qpair failed and we were unable to recover it. 00:25:01.562 [2024-04-19 04:16:15.865300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.562 [2024-04-19 04:16:15.865528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.865558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.865857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.866276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.866856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.866984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 
00:25:01.563 [2024-04-19 04:16:15.867285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.867500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.867532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.867674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.867940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.867969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.868244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.868410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.868441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.868672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.868902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.868932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.869207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.869410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.869441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.869610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.869819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.869854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.870135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.870333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.870370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 
00:25:01.563 [2024-04-19 04:16:15.870536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.870691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.870721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.870960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.871233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.871537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.871826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.871957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.872192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.872362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.872393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.872634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.872778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.872788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 
00:25:01.563 [2024-04-19 04:16:15.873041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.873234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.873244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.873422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.873758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.873798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.873927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.874219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.874689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.874986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.875279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.875544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.875575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.875843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.875965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.875995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 
00:25:01.563 [2024-04-19 04:16:15.876231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.876556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.876586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.876808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.877296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.877842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.563 [2024-04-19 04:16:15.877996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.563 qpair failed and we were unable to recover it. 00:25:01.563 [2024-04-19 04:16:15.878107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.878376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.878413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.878618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.878887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.878897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.879159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.879304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.879333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 
00:25:01.564 [2024-04-19 04:16:15.879651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.879860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.879890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.880133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.880361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.880391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.880593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.880798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.880828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.881100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.881267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.881297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.881573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.881776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.881805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.881944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.882120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.882149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.882291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.882566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.882597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 
00:25:01.564 [2024-04-19 04:16:15.882846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.883122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.883158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.883365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.883713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.883743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.883950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.884156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.884186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.884483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.884777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.884806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.884951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.885159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.885170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.885450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.885629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.885639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.885821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.886021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.886052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 
00:25:01.564 [2024-04-19 04:16:15.886323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.886546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.886577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.886809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.887268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.887776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.887928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.888133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.888422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.888453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.888605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.888824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.888853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.889073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.889209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.889240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 
00:25:01.564 [2024-04-19 04:16:15.889444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.889665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.889696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.889994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.890211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.890240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.890453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.890745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.890774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.890981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.891181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.891211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.564 [2024-04-19 04:16:15.891443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.891714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.564 [2024-04-19 04:16:15.891749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.564 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.891867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.892055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.892085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.892359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.892578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.892608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 
00:25:01.565 [2024-04-19 04:16:15.892856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.893194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.893532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.893730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.893887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.894016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.894046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.894285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.894522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.894552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.894844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.895026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.895035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 00:25:01.565 [2024-04-19 04:16:15.895234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.895509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.565 [2024-04-19 04:16:15.895540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.565 qpair failed and we were unable to recover it. 
[... 2024-04-19 04:16:15.895691 through 04:16:15.943395: the identical failure sequence repeats continuously — two posix.c:1037:posix_sock_create connect() failures (errno = 111), one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f83c0000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:25:01.570 [2024-04-19 04:16:15.943576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.943736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.943747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.943915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.944184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.944464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.944718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.944958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.945222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.945479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.945490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.945656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.945833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.945844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 
00:25:01.570 [2024-04-19 04:16:15.946025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.946426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.946709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.946883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.946986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.947449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.947698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.947796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.947957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 
00:25:01.570 [2024-04-19 04:16:15.948248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.948527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.948720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.570 qpair failed and we were unable to recover it. 00:25:01.570 [2024-04-19 04:16:15.948819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.570 [2024-04-19 04:16:15.949047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.949057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.949232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.949526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.949536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.949657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.949940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.949950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.950049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.950335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 
00:25:01.571 [2024-04-19 04:16:15.950549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.950664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.950775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.951236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.951608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.951792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.951943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.952383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.952648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.952841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 
00:25:01.571 [2024-04-19 04:16:15.952958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.953331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.953680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.953814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.953928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.954263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.954588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.954782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.954892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 
00:25:01.571 [2024-04-19 04:16:15.955244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.955533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.955654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.955892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.956204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.956496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.956683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.956858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.957038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.571 [2024-04-19 04:16:15.957049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.571 qpair failed and we were unable to recover it. 00:25:01.571 [2024-04-19 04:16:15.957222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 
00:25:01.572 [2024-04-19 04:16:15.957465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.957788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.957976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.958153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.958396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.958407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.958583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.958780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.958791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.958882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.959371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.959708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.959798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 
00:25:01.572 [2024-04-19 04:16:15.959888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.960272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.960569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.960773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.961048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.961274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.961690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.961851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.962079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 
00:25:01.572 [2024-04-19 04:16:15.962464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.962760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.962871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.963041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.963273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.963593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.963836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.964035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.964336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 
00:25:01.572 [2024-04-19 04:16:15.964613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.964785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.964894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.965188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.965478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.965720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.965818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.966044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.966154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.966164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.572 qpair failed and we were unable to recover it. 00:25:01.572 [2024-04-19 04:16:15.966334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.572 [2024-04-19 04:16:15.966447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.966458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 
00:25:01.573 [2024-04-19 04:16:15.966567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.966676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.966687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.966785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.966881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.966890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.966987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.967165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.967457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.967749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.967920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.968097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 
00:25:01.573 [2024-04-19 04:16:15.968375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.968607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.968836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.968947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.969016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.969341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.969570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.969775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.969885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 
00:25:01.573 [2024-04-19 04:16:15.970076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.970360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.970613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.970826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.970999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.971117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.971314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.971616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.971737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 
00:25:01.573 [2024-04-19 04:16:15.971840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.972234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.972467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.972730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.972903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.973113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.973336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 00:25:01.573 [2024-04-19 04:16:15.973563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.573 [2024-04-19 04:16:15.973752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.573 qpair failed and we were unable to recover it. 
00:25:01.573 [2024-04-19 04:16:15.973859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.973987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.973997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.974182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.974490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.974759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.974934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.975026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.975267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 00:25:01.574 [2024-04-19 04:16:15.975490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.574 [2024-04-19 04:16:15.975672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.574 qpair failed and we were unable to recover it. 
00:25:01.574 [2024-04-19 04:16:15.975767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:01.574 [2024-04-19 04:16:15.975872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:01.574 [2024-04-19 04:16:15.975883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 
00:25:01.574 qpair failed and we were unable to recover it. 
00:25:01.579 [... the same four-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 04:16:15.975 through 04:16:16.025 ...]
00:25:01.579 [2024-04-19 04:16:16.025627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.025732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.025744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.025849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.026247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.026504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.026871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.026994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.027188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.027303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.027316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.579 qpair failed and we were unable to recover it. 00:25:01.579 [2024-04-19 04:16:16.027578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.027784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.579 [2024-04-19 04:16:16.027796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 
00:25:01.580 [2024-04-19 04:16:16.027862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.027999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.028011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.028103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.028346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.028357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.028530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.028786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.028797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.028992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.029307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.029776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.029952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.030267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.030446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.030458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 
00:25:01.580 [2024-04-19 04:16:16.030588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.030690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.030701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.030938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.031315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.031659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.031777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.031940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.032373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.032749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.032975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 
00:25:01.580 [2024-04-19 04:16:16.033146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.033366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.033751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.033985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.034166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.034466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.034766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.034955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.035130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 
00:25:01.580 [2024-04-19 04:16:16.035415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.035721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.035963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.036127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.036245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.580 [2024-04-19 04:16:16.036256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.580 qpair failed and we were unable to recover it. 00:25:01.580 [2024-04-19 04:16:16.036379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.036497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.036508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.036684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.036802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.036813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.037071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.037418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 
00:25:01.581 [2024-04-19 04:16:16.037669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.037757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.037913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.038358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.038585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.038766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.038912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.039294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.039657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.039875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 
00:25:01.581 [2024-04-19 04:16:16.040060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.040236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.040247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.040428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.040658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.040669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.040842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.041193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.041804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.041934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.042055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.042482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 
00:25:01.581 [2024-04-19 04:16:16.042823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.042930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.043039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.043403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.043707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.043820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.043995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.044213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.044540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.044813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 
00:25:01.581 [2024-04-19 04:16:16.045065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.045455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.045792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.045970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.581 [2024-04-19 04:16:16.046142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.046319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.581 [2024-04-19 04:16:16.046330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.581 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.046427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.046601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.046612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.046767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.047196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 
00:25:01.582 [2024-04-19 04:16:16.047463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.047710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.048042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.048286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.048621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.048748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.049009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.049448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.049722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.049908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 
00:25:01.582 [2024-04-19 04:16:16.049984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.050223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.050640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.050757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.050862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.051207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.051424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.051606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.051766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 
00:25:01.582 [2024-04-19 04:16:16.052186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.052436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.052716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.052928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.053157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.053438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.053724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.053913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.054019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 
00:25:01.582 [2024-04-19 04:16:16.054369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.054676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.054862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.054951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.055228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.055239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.055339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.055532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.055544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.582 qpair failed and we were unable to recover it. 00:25:01.582 [2024-04-19 04:16:16.055743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.582 [2024-04-19 04:16:16.055930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.055941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.056106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.056514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 
00:25:01.583 [2024-04-19 04:16:16.056806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.056983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.057164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.057442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.057453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.057707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.057954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.057966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.058149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.058337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.058352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.058584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.058748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.058759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.058873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.059090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.059100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 00:25:01.583 [2024-04-19 04:16:16.059265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.059458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.583 [2024-04-19 04:16:16.059469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.583 qpair failed and we were unable to recover it. 
00:25:01.583 [2024-04-19 04:16:16.059579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.583 [2024-04-19 04:16:16.059699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.583 [2024-04-19 04:16:16.059710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.583 qpair failed and we were unable to recover it.
00:25:01.583 [2024-04-19 04:16:16.059816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.583 [2024-04-19 04:16:16.060048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.583 [2024-04-19 04:16:16.060059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.583 qpair failed and we were unable to recover it.
[... this same four-line failure pattern — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously for log timestamps 2024-04-19 04:16:16.060254 through 04:16:16.111768 (console wall clock 00:25:01.583 through 00:25:01.865), differing only in the per-entry timestamps ...]
00:25:01.865 [2024-04-19 04:16:16.111867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.111969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.111979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.865 qpair failed and we were unable to recover it. 00:25:01.865 [2024-04-19 04:16:16.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.112241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.112251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.865 qpair failed and we were unable to recover it. 00:25:01.865 [2024-04-19 04:16:16.112409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.112646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.865 [2024-04-19 04:16:16.112656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.112885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.113149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.113528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.113808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.114047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.114261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.114271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 
00:25:01.866 [2024-04-19 04:16:16.114471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.114699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.114708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.114938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.115272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.115682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.115868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.116049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.116427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.116699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.116938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 
00:25:01.866 [2024-04-19 04:16:16.117050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.117263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.117707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.117891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.118087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.118271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.118280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.118465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.118648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.118657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.118835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.119212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 
00:25:01.866 [2024-04-19 04:16:16.119662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.119789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.119964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.120264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.120497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.120718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.866 [2024-04-19 04:16:16.120920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.121011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.866 [2024-04-19 04:16:16.121020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.866 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.121186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.121394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.121404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.121652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.121830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.121839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 
00:25:01.867 [2024-04-19 04:16:16.122071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.122283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.122293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.122402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.122660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.122669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.122834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.123110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.123457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.123740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.123927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.124098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 
00:25:01.867 [2024-04-19 04:16:16.124372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.124652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.124829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.124938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.125117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.125126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.125297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.125564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.125572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.125821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.126187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.126485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 
00:25:01.867 [2024-04-19 04:16:16.126784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.126958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.127124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.127359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.127372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.127488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.127653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.127665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.127848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.128195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.128466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.128656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.128853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 
00:25:01.867 [2024-04-19 04:16:16.129193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.129560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.129726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.129835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.130282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.130627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.867 [2024-04-19 04:16:16.130797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.867 qpair failed and we were unable to recover it. 00:25:01.867 [2024-04-19 04:16:16.130997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.131349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 
00:25:01.868 [2024-04-19 04:16:16.131710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.131900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.132129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.132363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.132739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.132925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.133084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.133246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.133255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.133357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.133548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.133557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.133825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 
00:25:01.868 [2024-04-19 04:16:16.134226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.134502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.134776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.134895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.135097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.135359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.135369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.135536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.135696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.135705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.135949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.136224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 
00:25:01.868 [2024-04-19 04:16:16.136625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.136799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.136961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.137167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.137176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.137274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.137444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.137456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.137763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.138203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.138707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.138972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.139081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.139196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.139208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 
00:25:01.868 [2024-04-19 04:16:16.139402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.139634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.139644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.139837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.140182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.140589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.140774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.140877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.141075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.141085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.868 [2024-04-19 04:16:16.141315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.141512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.868 [2024-04-19 04:16:16.141522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.868 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.141652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.141747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.141757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 
00:25:01.869 [2024-04-19 04:16:16.141858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.141926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.141935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.142114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.142536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.142759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.142941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.143193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.143354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.143364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.143631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.143803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.143812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.143921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 
00:25:01.869 [2024-04-19 04:16:16.144358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.144813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.144913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.145072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.145359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.145712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.145953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.146048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.146415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 
00:25:01.869 [2024-04-19 04:16:16.146733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.146854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.147013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.147365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.147756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.147934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.148097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.148352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.148362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.148472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.148752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.148761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 00:25:01.869 [2024-04-19 04:16:16.148862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.149024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.869 [2024-04-19 04:16:16.149034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.869 qpair failed and we were unable to recover it. 
00:25:01.875 [2024-04-19 04:16:16.195423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.195519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.195528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.195690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.195792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.195801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.195963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.196218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.196414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.196763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.196930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.197132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 
00:25:01.875 [2024-04-19 04:16:16.197402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.197690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.197834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.198149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.198416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.198760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.198927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.199104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.199400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 
00:25:01.875 [2024-04-19 04:16:16.199598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.199853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.199961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.200120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.200335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.200660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.200834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.200937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.201218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 
00:25:01.875 [2024-04-19 04:16:16.201488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.875 qpair failed and we were unable to recover it. 00:25:01.875 [2024-04-19 04:16:16.201692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.875 [2024-04-19 04:16:16.201875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.202036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.202321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.202607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.202874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.202974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.203134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.203310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.203319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 
00:25:01.876 [2024-04-19 04:16:16.203573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.203803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.203812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.203906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.204180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.204454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.204794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.204891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.205162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.205450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 
00:25:01.876 [2024-04-19 04:16:16.205701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.205823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.205976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.206175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.206440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.206802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.206899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.206996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.207270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 
00:25:01.876 [2024-04-19 04:16:16.207465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.207737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.207861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.207950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.208288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.208563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.876 [2024-04-19 04:16:16.208758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.876 qpair failed and we were unable to recover it. 00:25:01.876 [2024-04-19 04:16:16.208955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.209238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 
00:25:01.877 [2024-04-19 04:16:16.209448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.209836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.209940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.210050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.210253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.210600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.210804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.210913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.211008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 
00:25:01.877 [2024-04-19 04:16:16.211369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.211605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.211706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.211898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.212233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.212448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.212699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.212950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.213060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 
00:25:01.877 [2024-04-19 04:16:16.213288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.213653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.213842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.214004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.214241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.214513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.214740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.214860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.214964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 
00:25:01.877 [2024-04-19 04:16:16.215245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.215504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.215839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.215994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.216003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.216162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.216266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.216275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.877 qpair failed and we were unable to recover it. 00:25:01.877 [2024-04-19 04:16:16.216437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.877 [2024-04-19 04:16:16.216544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.216552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.216677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.216848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.216857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.217089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 
00:25:01.878 [2024-04-19 04:16:16.217307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.217700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.217826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.217934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.218277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.218548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.218666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.218831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.219181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 
00:25:01.878 [2024-04-19 04:16:16.219480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.219685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.219911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.220174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.220444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.220735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.220927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.221025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.221382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 
00:25:01.878 [2024-04-19 04:16:16.221701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.221781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.221888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.222302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.222593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.222830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.223034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.223358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.223387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.223533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.223760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.223789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 00:25:01.878 [2024-04-19 04:16:16.224104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.224289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.878 [2024-04-19 04:16:16.224298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.878 qpair failed and we were unable to recover it. 
00:25:01.878 [2024-04-19 04:16:16.224503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.878 [2024-04-19 04:16:16.224660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.878 [2024-04-19 04:16:16.224669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.878 qpair failed and we were unable to recover it.
00:25:01.878 [... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 04:16:16.224 through 04:16:16.275 with only the timestamps changing; duplicate entries elided ...]
00:25:01.884 [2024-04-19 04:16:16.275839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.884 [2024-04-19 04:16:16.276042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.884 [2024-04-19 04:16:16.276051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.884 qpair failed and we were unable to recover it.
00:25:01.884 [2024-04-19 04:16:16.276210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.884 qpair failed and we were unable to recover it. 00:25:01.884 [2024-04-19 04:16:16.276652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.884 qpair failed and we were unable to recover it. 00:25:01.884 [2024-04-19 04:16:16.276861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.276980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.884 qpair failed and we were unable to recover it. 00:25:01.884 [2024-04-19 04:16:16.277152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.277235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.277243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.884 qpair failed and we were unable to recover it. 00:25:01.884 [2024-04-19 04:16:16.277333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.277515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.884 [2024-04-19 04:16:16.277524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.884 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.277630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.277738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.277746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.277849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.277947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.277956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 
00:25:01.885 [2024-04-19 04:16:16.278190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.278313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.278352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.278492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.278695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.278724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.278937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.279276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.279640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.279868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.280015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.280358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 
00:25:01.885 [2024-04-19 04:16:16.280751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.280988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.281198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.281423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.281453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.281675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.281887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.281916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.282054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.282427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.282807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.282979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.283128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.283351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.283381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 
00:25:01.885 [2024-04-19 04:16:16.283584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.283793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.283822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.284161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.284525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.284738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.284954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.285115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.285313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 00:25:01.885 [2024-04-19 04:16:16.285526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.885 qpair failed and we were unable to recover it. 
00:25:01.885 [2024-04-19 04:16:16.285788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.885 [2024-04-19 04:16:16.285960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.286047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.286252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.286455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.286796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.286973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.287123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.287327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.287363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.287566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.287859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.287888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 
00:25:01.886 [2024-04-19 04:16:16.288087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.288348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.288621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.288807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.289047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.289271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.289300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.289534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.289680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.289708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.290015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.290440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 
00:25:01.886 [2024-04-19 04:16:16.290823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.290992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.291125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.291542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.291827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.291947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.292069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.292338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.292556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 
00:25:01.886 [2024-04-19 04:16:16.292819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.292922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.293009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.293213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.293392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.293599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.293706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.293940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.294300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 
00:25:01.886 [2024-04-19 04:16:16.294496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.886 [2024-04-19 04:16:16.294676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.886 qpair failed and we were unable to recover it. 00:25:01.886 [2024-04-19 04:16:16.294781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.294889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.294898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.294989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.295208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.295451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.295716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.295823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.295981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-04-19 04:16:16.296240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.296478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.296691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.296791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.296969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.297314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.297621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.297854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.298088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.298392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.298422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-04-19 04:16:16.298554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.298701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.298730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.298943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.299246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.299673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.299841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.299989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.300192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.300466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-04-19 04:16:16.300675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.300785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.300951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.301123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.301152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.301392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.301608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.301637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.301933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.302278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.302627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.302749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.302896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-04-19 04:16:16.303211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.303601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.303841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.304041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.304220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.304229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.887 qpair failed and we were unable to recover it. 00:25:01.887 [2024-04-19 04:16:16.304350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.887 [2024-04-19 04:16:16.304511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.304520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.304627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.304736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.304745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.304842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.305175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 
00:25:01.888 [2024-04-19 04:16:16.305515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.305754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.305996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.306297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.306633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.306969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.307233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.307487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 00:25:01.888 [2024-04-19 04:16:16.307763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.888 [2024-04-19 04:16:16.307959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.888 qpair failed and we were unable to recover it. 
00:25:01.888 [2024-04-19 04:16:16.308124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.888 [2024-04-19 04:16:16.308286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.888 [2024-04-19 04:16:16.308295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.888 qpair failed and we were unable to recover it.
[the same four-record failure group (connect() refused with errno = 111, i.e. ECONNREFUSED, followed by the nvme_tcp qpair connection error for tqpair=0x7f83c0000b90 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 04:16:16.308 and 04:16:16.360; only the timestamps differ]
00:25:01.893 [2024-04-19 04:16:16.360498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.893 [2024-04-19 04:16:16.360621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.893 [2024-04-19 04:16:16.360629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:01.893 qpair failed and we were unable to recover it.
00:25:01.893 [2024-04-19 04:16:16.360798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.360888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.360907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.361002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.361259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.361291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.361438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.361583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.361613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.361749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.362104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.362470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.362791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.362892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 
00:25:01.893 [2024-04-19 04:16:16.362980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.363333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.363731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.363960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.364091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.364287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.364316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.364463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.364609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.364638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.364796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.365311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 
00:25:01.893 [2024-04-19 04:16:16.365648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.365909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.366059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.366213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.366255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.366549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.366761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.366791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.366941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.367139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.367168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.893 qpair failed and we were unable to recover it. 00:25:01.893 [2024-04-19 04:16:16.367318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.893 [2024-04-19 04:16:16.367474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.367503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.367707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.367834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.367863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.368072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.368279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.368307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 
00:25:01.894 [2024-04-19 04:16:16.368475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.368697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.368737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.368929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.369354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.369577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.369759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.369859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.370270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:01.894 [2024-04-19 04:16:16.370670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.370923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 
00:25:01.894 [2024-04-19 04:16:16.371162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.371319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.894 [2024-04-19 04:16:16.371329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:01.894 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.371520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.371616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.371625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.371888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.372225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.372556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.372828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.372996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.373100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 
00:25:02.168 [2024-04-19 04:16:16.373384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.373715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.373837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.373939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.374030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.374039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.168 qpair failed and we were unable to recover it. 00:25:02.168 [2024-04-19 04:16:16.374130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.168 [2024-04-19 04:16:16.374245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.374354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.374560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.374788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.374960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 
00:25:02.169 [2024-04-19 04:16:16.375119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.375317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.375544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.375895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.375990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.376155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.376375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.376591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 
00:25:02.169 [2024-04-19 04:16:16.376815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.376999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.377167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.377410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.377681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.377892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.377998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.378104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.378307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 
00:25:02.169 [2024-04-19 04:16:16.378563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.378765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.378889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.379056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.379356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.379615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.379887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.380030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.380361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 
00:25:02.169 [2024-04-19 04:16:16.380818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.380973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.381104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.381236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.381265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.381404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.381551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.381580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.169 qpair failed and we were unable to recover it. 00:25:02.169 [2024-04-19 04:16:16.381725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.169 [2024-04-19 04:16:16.381926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.381966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.382062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.382332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.382609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 
00:25:02.170 [2024-04-19 04:16:16.382836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.382952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.383064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.383331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.383599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.383765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.383906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.384266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.384715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.384953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 
00:25:02.170 [2024-04-19 04:16:16.385088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.385355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.385689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.385849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.385991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.386193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.386221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.386366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.386567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.386596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.386866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.387185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 
00:25:02.170 [2024-04-19 04:16:16.387497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.387754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.387863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.388026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.388239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.388600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.388857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.389002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.389391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 
00:25:02.170 [2024-04-19 04:16:16.389625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.389815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.389986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.390166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.390414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.390443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.170 qpair failed and we were unable to recover it. 00:25:02.170 [2024-04-19 04:16:16.390649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.390960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.170 [2024-04-19 04:16:16.390988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.391234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.391438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.391447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.391617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.391720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.391728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.391907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 
00:25:02.171 [2024-04-19 04:16:16.392182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.392421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.392785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.392982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.393077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.393180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.393189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.393444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.393613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.393622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.393883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.394291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 
00:25:02.171 [2024-04-19 04:16:16.394641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.394826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.395032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.395414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.395731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.395908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.396054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.396414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 00:25:02.171 [2024-04-19 04:16:16.396646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.171 [2024-04-19 04:16:16.396815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.171 qpair failed and we were unable to recover it. 
00:25:02.171 [2024-04-19 04:16:16.396927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.397201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.397448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.397741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.397983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.398164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.398493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.398789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.398907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.399002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.399361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.399818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.399988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.400259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.400470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.171 [2024-04-19 04:16:16.400479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.171 qpair failed and we were unable to recover it.
00:25:02.171 [2024-04-19 04:16:16.400638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.400888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.400896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.401091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.401489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.401703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.401961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.402063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.402347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.402693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.402886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.402993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.403161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.403429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.403628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.403795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.403967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.404253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.404545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.404655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.404823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.405134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.405424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.405703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.405810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.405901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.406106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.406326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.406538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.406715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.406944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.407237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.407537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.407775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.407936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.408227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.408604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.408784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.172 qpair failed and we were unable to recover it.
00:25:02.172 [2024-04-19 04:16:16.409045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.172 [2024-04-19 04:16:16.409145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.409387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.409594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.409828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.409921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.410027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.410358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.410709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.410817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.410983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.411171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.411528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.411743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.411922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.412020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.412359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.412578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.412840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.412956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.413133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.413353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.413630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.413862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.413973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.414092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.414454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.414770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.414865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.173 qpair failed and we were unable to recover it.
00:25:02.173 [2024-04-19 04:16:16.415046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.173 [2024-04-19 04:16:16.415211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.415220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.415450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.415550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.415558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.415680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.415930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.415939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.416044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.416373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.416645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.416758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.416857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.417208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.417439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.417652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.417865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.417973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.418139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.418519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.418866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.418968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.419080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.419291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.419625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.419823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.419925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.420026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.420262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.420605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.420701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.420886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.421176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.421462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.421753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.421965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.422071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.422281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.422515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.174 [2024-04-19 04:16:16.422780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.174 [2024-04-19 04:16:16.422893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.174 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.423052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.423387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.423663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.423830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.424000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.424385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.424591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.424850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.424959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.425155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.425382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.425390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.425528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.425785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.425794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.425968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.426189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.426400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.426804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.426998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.427186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.427428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.427713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.427808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.427917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.428207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.428449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.428787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.428901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.429104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.429387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.429645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.429812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.429910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.430208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.430473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.430841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.430972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.431068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.431246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.175 [2024-04-19 04:16:16.431255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.175 qpair failed and we were unable to recover it.
00:25:02.175 [2024-04-19 04:16:16.431326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.431418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.431428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.431608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.431839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.431848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.432010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.432190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.432467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.432837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.432939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.433171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.433331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.433340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.433573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.433743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.176 [2024-04-19 04:16:16.433751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.176 qpair failed and we were unable to recover it.
00:25:02.176 [2024-04-19 04:16:16.434006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.434386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.434594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.434711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.434891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.435273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.435712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.435890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.436057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 
00:25:02.176 [2024-04-19 04:16:16.436229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.436441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.436651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.436764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.436923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.437231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.437610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.437788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.437947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 
00:25:02.176 [2024-04-19 04:16:16.438246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.438455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.438664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.438766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.438925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.439157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.439560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.439733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.176 qpair failed and we were unable to recover it. 00:25:02.176 [2024-04-19 04:16:16.439826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.176 [2024-04-19 04:16:16.440003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.440012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 
00:25:02.177 [2024-04-19 04:16:16.440244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.440353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.440362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.440522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.440683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.440692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.440864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.441180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.441521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.441707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.441984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.442340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 
00:25:02.177 [2024-04-19 04:16:16.442703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.442815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.443085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.443516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.443814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.443937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.444101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.444365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.444740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.444851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 
00:25:02.177 [2024-04-19 04:16:16.444955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.445305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.445643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.445770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.445938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.446177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.446521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.446721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.446929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 
00:25:02.177 [2024-04-19 04:16:16.447098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.447386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.447746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.447985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.177 qpair failed and we were unable to recover it. 00:25:02.177 [2024-04-19 04:16:16.448158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.448329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.177 [2024-04-19 04:16:16.448338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.448433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.448535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.448544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.448719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.448884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.448892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.449070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 
00:25:02.178 [2024-04-19 04:16:16.449381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.449731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.449850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.450028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.450318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.450583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.450689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.450846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.451133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 
00:25:02.178 [2024-04-19 04:16:16.451483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.451595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.451773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.452180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.452432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.452617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.452852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.453181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.453483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.453599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 
00:25:02.178 [2024-04-19 04:16:16.453833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.454177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.454506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.454732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.454901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.455129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.455368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.455377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.455537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.455623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.455632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.455834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 
00:25:02.178 [2024-04-19 04:16:16.456255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.456465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.456578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.178 qpair failed and we were unable to recover it. 00:25:02.178 [2024-04-19 04:16:16.456864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.178 [2024-04-19 04:16:16.457042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.457223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.457493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.457698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.457885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.457999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 
00:25:02.179 [2024-04-19 04:16:16.458224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.458485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.458750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.458922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.459363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.459787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.459973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.460093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.460437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 
00:25:02.179 [2024-04-19 04:16:16.460713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.460842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.461036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.461328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.461736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.461999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.462159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.462262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.462271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.462453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.462610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.462619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.462804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 
00:25:02.179 [2024-04-19 04:16:16.463203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.463662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.463845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.464001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.464092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.464101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.464358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.464585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.464594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.464839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.465248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.465611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.465874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 
00:25:02.179 [2024-04-19 04:16:16.466059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.466401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.179 [2024-04-19 04:16:16.466742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.179 [2024-04-19 04:16:16.466906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.179 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.467141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.467318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.467326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.467507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.467684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.467693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.467863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.468173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 
00:25:02.180 [2024-04-19 04:16:16.468461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.468793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.468873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.469102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.469398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.469740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.469866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.470026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.470114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.470123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 00:25:02.180 [2024-04-19 04:16:16.470284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.470565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.180 [2024-04-19 04:16:16.470574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.180 qpair failed and we were unable to recover it. 
00:25:02.180 [2024-04-19 04:16:16.470752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.180 [2024-04-19 04:16:16.470947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.180 [2024-04-19 04:16:16.470956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.180 qpair failed and we were unable to recover it.
00:25:02.185 (the three-line connect()/qpair error sequence above repeats, differing only in its in-log timestamps, from [2024-04-19 04:16:16.471076] through [2024-04-19 04:16:16.523824]: every reconnect attempt to 10.0.0.2:4420 on tqpair=0x7f83c0000b90 fails with errno = 111 and the qpair cannot be recovered)
00:25:02.185 [2024-04-19 04:16:16.524023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.185 [2024-04-19 04:16:16.524228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.185 [2024-04-19 04:16:16.524237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.185 qpair failed and we were unable to recover it. 00:25:02.185 [2024-04-19 04:16:16.524354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.524523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.524532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.524838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.525221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.525575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.525863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.525965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.526152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 
00:25:02.186 [2024-04-19 04:16:16.526357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.526720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.526919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.527180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.527482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.527787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.527903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.528000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.528470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 
00:25:02.186 [2024-04-19 04:16:16.528807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.528976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.529227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.529417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.529426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.529647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.529824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.529833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.529922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.530244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.530461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.530644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.530806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 
00:25:02.186 [2024-04-19 04:16:16.531304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.531601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.531781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.531889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.532225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.532502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.532739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.532940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.533199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.533208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.186 qpair failed and we were unable to recover it. 00:25:02.186 [2024-04-19 04:16:16.533449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.186 [2024-04-19 04:16:16.533612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.533621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 
00:25:02.187 [2024-04-19 04:16:16.533725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.533839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.533848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.534094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.534399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.534760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.534859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.535055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.535545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.535847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.535972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 
00:25:02.187 [2024-04-19 04:16:16.536235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.536405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.536414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.536582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.536767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.536776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.536956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.537234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.537243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.537431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.537595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.537605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.537884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.538364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.538706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.538898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 
00:25:02.187 [2024-04-19 04:16:16.539009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.539186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.539195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.539427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.539598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.539606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.539869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.540401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.540770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.187 [2024-04-19 04:16:16.540895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.187 qpair failed and we were unable to recover it. 00:25:02.187 [2024-04-19 04:16:16.541055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.541389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 
00:25:02.188 [2024-04-19 04:16:16.541740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.541909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.542083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.542425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.542690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.542872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.543036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.543236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.543245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.543424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.543653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.543662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.543857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 
00:25:02.188 [2024-04-19 04:16:16.544190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.544529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.544766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.545017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.545405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.545733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.545844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.545958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.546231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 
00:25:02.188 [2024-04-19 04:16:16.546598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.546781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.546888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.547261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.547630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.547796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.548053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.548512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.548729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.548916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 
00:25:02.188 [2024-04-19 04:16:16.549086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.549335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.549347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.549531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.549787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.549796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.188 [2024-04-19 04:16:16.549992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.550162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.188 [2024-04-19 04:16:16.550171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.188 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.550402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.550506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.550514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.550659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.550914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.550923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.551108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.551381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.551390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.551648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.551905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.551915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 
00:25:02.189 [2024-04-19 04:16:16.552107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.552546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.552816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.552942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.553185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.553306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.553314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.553472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.553659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.553668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.553837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.554217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 
00:25:02.189 [2024-04-19 04:16:16.554515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.554824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.554936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.555042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.555423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.555706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.555878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.556115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.556415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 
00:25:02.189 [2024-04-19 04:16:16.556740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.556845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.556940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.557149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.557467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.557737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.557849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.558024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.189 [2024-04-19 04:16:16.558033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.189 qpair failed and we were unable to recover it. 00:25:02.189 [2024-04-19 04:16:16.558315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.558491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.558501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.558593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.558697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.558706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 
00:25:02.190 [2024-04-19 04:16:16.558867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.559219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.559590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.559853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.560042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.560153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.560162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.560366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.560596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.560605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.560774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.561185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 
00:25:02.190 [2024-04-19 04:16:16.561592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.561793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.561937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.562268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.562756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.562943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.563162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.563431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.563461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.563755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.564034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.564064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 00:25:02.190 [2024-04-19 04:16:16.564271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.564507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.190 [2024-04-19 04:16:16.564537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.190 qpair failed and we were unable to recover it. 
00:25:02.190 [2024-04-19 04:16:16.564737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.564919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.564948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.565248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.565386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.565417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.565691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.565801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.565811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.566075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.566274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.566303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.566646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.566961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.566990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.567267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.567500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.567530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.567852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.568120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.568148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.568368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.568582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.568611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.568824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.569277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.190 [2024-04-19 04:16:16.569652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.190 [2024-04-19 04:16:16.569915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.190 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.570186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.570420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.570450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.570674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.570827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.570856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.571163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.571454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.571756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.571862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.572091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.572294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.572323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.572538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.572719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.572748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.572895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.573233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.573262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.573495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.573730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.573759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.573962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.574509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.574781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.574953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.575227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.575523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.575553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.575722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.575931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.575940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.576114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.576217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.576226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.576482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.576682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.576690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.576877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.577086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.577114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.577411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.577678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.577706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.577909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.578108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.578137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.578435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.578729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.578758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.578977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.579243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.579271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.579485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.579682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.579710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.579962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.580152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.580161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.580322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.580550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.580559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.191 qpair failed and we were unable to recover it.
00:25:02.191 [2024-04-19 04:16:16.580789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.580989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.191 [2024-04-19 04:16:16.581008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.581190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.581301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.581310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.581472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.581592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.581601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.581779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.582067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.582095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.582264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.582503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.582533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.582755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.583062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.583091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.583366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.583660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.583689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.583913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.584036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.584066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.584374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.584520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.584549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.584822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.585091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.585120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.585421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.585670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.585679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.585935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.586117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.586126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.586287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.586575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.586605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.586811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.587219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.587708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.587924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.588162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.588428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.588457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.588664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.588883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.588912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.589135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.589368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.589398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.589556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.589764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.589793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.192 qpair failed and we were unable to recover it.
00:25:02.192 [2024-04-19 04:16:16.589997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.590129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.192 [2024-04-19 04:16:16.590158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.590297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.590461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.590491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.590820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.590968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.590996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.591272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.591537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.591546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.591775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.591951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.591959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.592053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.592241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.592270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.592550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.592700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.592735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.592832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.593186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.593547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.593744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.593977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.594272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.594301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.594588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.594846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.594875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.595089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.595298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.595326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.595546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.595847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.595876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.596113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.596322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.596359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.596578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.596822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.596851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.597016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.597300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.597328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.597583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.597811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.597840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.598137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.598375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.598405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.598714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.598998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.599007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.599128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.599306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.599315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.599562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.599671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.599679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.599906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.600298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.600777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.600983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.601092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.601262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.601271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.193 qpair failed and we were unable to recover it.
00:25:02.193 [2024-04-19 04:16:16.601432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.601636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.193 [2024-04-19 04:16:16.601645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.601876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.602357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.602630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.602845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.603046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.603273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.603302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.603536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.603738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.603768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.604088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.604256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.604265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.604495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.604751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.604780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.605022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.605313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.605349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.605658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.605939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.605948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.606147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.606341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.606377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.606596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.606862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.606891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.607137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.607375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.607406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.607632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.607893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.607922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.608218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.608513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.608543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.608762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.608984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.608993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.609105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.609456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.609733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.609997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.610214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.610398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.610428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.610704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.610925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.610954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.611121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.611332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.611370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.611674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.612002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.612031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.612328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.612555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.612584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.612862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.613292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.613638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.613831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.194 [2024-04-19 04:16:16.614040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.614294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.194 [2024-04-19 04:16:16.614324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.194 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.614542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.614840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.614869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.615109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.615323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.615359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.615658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.615858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.615895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.616071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.616260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.616289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.616516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.616740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.616770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.617044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.617289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.617298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.617485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.617662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.617671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.617843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.618325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.618659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.618955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.619112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.619270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.619299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.619468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.619607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.619637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.619935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.620144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.620174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.620309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.620586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.620615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.620836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.621106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.621141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.621278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.621489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.621519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.621793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.622059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.195 [2024-04-19 04:16:16.622088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.195 qpair failed and we were unable to recover it.
00:25:02.195 [2024-04-19 04:16:16.622390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.622633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.622642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.622812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.623267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.623710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.623940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.624287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.624444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.624473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.624675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.624848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.624876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.195 qpair failed and we were unable to recover it. 00:25:02.195 [2024-04-19 04:16:16.625229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.625498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.195 [2024-04-19 04:16:16.625529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 
00:25:02.196 [2024-04-19 04:16:16.625831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.625972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.626006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.626277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.626506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.626536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.626810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.627241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.627738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.627904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.628048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.628347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 
00:25:02.196 [2024-04-19 04:16:16.628795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.628990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.629201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.629400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.629431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.629662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.629875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.629904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.630136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.630283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.630317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.630515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.630732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.630770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.631028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.631280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.631309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.631601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.631826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.631855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 
00:25:02.196 [2024-04-19 04:16:16.632143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.632338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.632362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.632645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.632917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.632934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.633067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.633341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.633380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.633651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.633949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.633978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.634198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.634425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.634455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.634673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.634815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.634844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.635148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.635405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.635427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 
00:25:02.196 [2024-04-19 04:16:16.635699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.635884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.635900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.636123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.636420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.636449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.636734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.636948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.636964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.637242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.637515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.637531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.637801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.637989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.638006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.638249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.638433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.638450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 00:25:02.196 [2024-04-19 04:16:16.638573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.638676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.196 [2024-04-19 04:16:16.638685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.196 qpair failed and we were unable to recover it. 
00:25:02.196 [2024-04-19 04:16:16.638915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.639112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.639121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.639297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.639468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.639498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.639849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.640242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.640703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.640907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.641207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.641508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.641538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.641756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.642013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.642042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 
00:25:02.197 [2024-04-19 04:16:16.642358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.642644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.642653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.642821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.643189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.643580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.643756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.643986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.644128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.644157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.644445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.644596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.644605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.644791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.645026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.645055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 
00:25:02.197 [2024-04-19 04:16:16.645359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.645556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.645585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.645873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.646254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.646731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.646980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.647138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.647331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.647370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.647534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.647738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.647766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.648087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.648210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.648219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 
00:25:02.197 [2024-04-19 04:16:16.648473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.648670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.648699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.649003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.649238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.197 [2024-04-19 04:16:16.649267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.197 qpair failed and we were unable to recover it. 00:25:02.197 [2024-04-19 04:16:16.649420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.649610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.649639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.649789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.649999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.650028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.650296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.650504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.650534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.650740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.650940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.650969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.651101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.651230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.651257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 
00:25:02.198 [2024-04-19 04:16:16.651497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.651647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.651675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.651924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.652164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.652172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.652455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.652685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.652693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.652940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.653333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.653734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.653851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.654106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 
00:25:02.198 [2024-04-19 04:16:16.654399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.654689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.654816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.654985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.655424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.655727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.655834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.656062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.656230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.656239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.656334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.656568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.656577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 
00:25:02.198 [2024-04-19 04:16:16.656753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.657205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.657637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.657873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.658166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.658370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.658379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.658552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.658724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.658733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.658965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.659221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.659230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.659431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.659606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.659615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 
00:25:02.198 [2024-04-19 04:16:16.659786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.660040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.660049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.660213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.660327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.198 [2024-04-19 04:16:16.660336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.198 qpair failed and we were unable to recover it. 00:25:02.198 [2024-04-19 04:16:16.660423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.660581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.660589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.660781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.660954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.660962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.661121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.661269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.661278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.661469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.661664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.661673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.661853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 
00:25:02.199 [2024-04-19 04:16:16.662228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.662540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.662849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.662961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.663191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.663292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.663301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.663533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.663837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.663847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.664107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.664283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.664292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.664455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.664688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.664696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 
00:25:02.199 [2024-04-19 04:16:16.664945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.665292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.665606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.665781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.665950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.666269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.666610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.666792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.666882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 
00:25:02.199 [2024-04-19 04:16:16.667230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.667607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.667784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.667961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.668337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.668608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.668797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.668958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.669237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.669247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 00:25:02.199 [2024-04-19 04:16:16.669422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.669528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.199 [2024-04-19 04:16:16.669537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.199 qpair failed and we were unable to recover it. 
00:25:02.199 [2024-04-19 04:16:16.669718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.199 [2024-04-19 04:16:16.669883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.199 [2024-04-19 04:16:16.669892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.199 qpair failed and we were unable to recover it.
[... identical retry sequence — two "connect() failed, errno = 111" (ECONNREFUSED) records followed by "sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeated for every connection attempt from 04:16:16.670 through 04:16:16.726 ...]
00:25:02.481 [2024-04-19 04:16:16.726060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.481 [2024-04-19 04:16:16.726230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.481 [2024-04-19 04:16:16.726239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.481 qpair failed and we were unable to recover it.
00:25:02.481 [2024-04-19 04:16:16.726511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.726706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.726715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.726904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.727223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.727595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.727773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.727980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.728237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.728246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.728437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.728636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.728645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 00:25:02.481 [2024-04-19 04:16:16.728846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.729053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.729062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.481 qpair failed and we were unable to recover it. 
00:25:02.481 [2024-04-19 04:16:16.729172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.481 [2024-04-19 04:16:16.729394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.729404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.729606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.729808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.729819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.730113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.730491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.730821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.730956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.731155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.731391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.731401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.731574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.731748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.731757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 
00:25:02.482 [2024-04-19 04:16:16.731875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.732218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.732578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.732874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.732998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.733172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.733428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.733438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.733637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.733815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.733824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.733935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 
00:25:02.482 [2024-04-19 04:16:16.734351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.734723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.734922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.735152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.735306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.735316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.735516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.735744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.735753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.736017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.736217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.736247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.736403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.736682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.736711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.736862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 
00:25:02.482 [2024-04-19 04:16:16.737265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.737613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.737851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.482 [2024-04-19 04:16:16.738012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.738118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.482 [2024-04-19 04:16:16.738127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.482 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.738383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.738494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.738503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.738674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.738854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.738862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.739061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.739248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.739277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.739448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.739593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.739622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 
00:25:02.483 [2024-04-19 04:16:16.739778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.739974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.740003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.740162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.740373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.740382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.740539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.740698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.740706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.740894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.741205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.741537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.741718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.741969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 
00:25:02.483 [2024-04-19 04:16:16.742201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.742516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.742796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.742929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.743273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.743572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.743694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.743922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.744257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 
00:25:02.483 [2024-04-19 04:16:16.744538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.744831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.744927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.745034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.745367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.745797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.745922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.746025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.483 [2024-04-19 04:16:16.746330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 
00:25:02.483 [2024-04-19 04:16:16.746763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.483 [2024-04-19 04:16:16.746941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.483 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.747097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.747247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.747276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.747478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.747685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.747714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.747849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.748386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.748754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.748921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.749080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.749311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.749320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-04-19 04:16:16.749501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.749728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.749738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.749973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.750254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.750515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.750871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.750984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.751150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.751413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.751422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.751616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.751771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.751779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-04-19 04:16:16.751883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.752205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.752478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.752739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.752911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.753026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.753446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.753792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.753958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-04-19 04:16:16.754052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.754224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.754234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.754522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.754734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.754744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.754914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.755328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.755703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.755955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.756128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.756355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.756365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 00:25:02.484 [2024-04-19 04:16:16.756606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.756861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.484 [2024-04-19 04:16:16.756872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.484 qpair failed and we were unable to recover it. 
00:25:02.484 [2024-04-19 04:16:16.757030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.757335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.757634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.757916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.757997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.758187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.758541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.758895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.758994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-04-19 04:16:16.759159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.759264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.759272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.759453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.759619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.759628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.759884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.760264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.760684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.760886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.761003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.761172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.761181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 00:25:02.485 [2024-04-19 04:16:16.761301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.761483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.485 [2024-04-19 04:16:16.761492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.485 qpair failed and we were unable to recover it. 
00:25:02.485 [2024-04-19 04:16:16.761672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.485 [2024-04-19 04:16:16.761772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.485 [2024-04-19 04:16:16.761780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.485 qpair failed and we were unable to recover it.
[... the same four-line failure sequence repeats for roughly 150 further reconnect attempts between 04:16:16.761955 and 04:16:16.830741, every attempt failing with errno = 111 on tqpair=0x7f83c0000b90, addr=10.0.0.2, port=4420 ...]
00:25:02.491 [2024-04-19 04:16:16.830991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.491 [2024-04-19 04:16:16.831311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.491 [2024-04-19 04:16:16.831339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.491 qpair failed and we were unable to recover it.
00:25:02.491 [2024-04-19 04:16:16.831562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.831867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.831896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.491 qpair failed and we were unable to recover it. 00:25:02.491 [2024-04-19 04:16:16.832069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.832304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.832333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.491 qpair failed and we were unable to recover it. 00:25:02.491 [2024-04-19 04:16:16.832563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.832746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.832775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.491 qpair failed and we were unable to recover it. 00:25:02.491 [2024-04-19 04:16:16.833055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.833320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.491 [2024-04-19 04:16:16.833356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.833566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.833766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.833794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.833949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.834366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 
00:25:02.492 [2024-04-19 04:16:16.834813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.834979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.835184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.835314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.835322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.835572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.835835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.835863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.836014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.836282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.836311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.836589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.836678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.836686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.836857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.837299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 
00:25:02.492 [2024-04-19 04:16:16.837704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.837888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.838110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.838406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.838436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.838651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.838934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.838963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.839303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.839526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.839535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.839701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.839892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.839921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.840137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.840462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.840472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.840729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 
00:25:02.492 [2024-04-19 04:16:16.841289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.841639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.841879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.842158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.842427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.842457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.842664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.842822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.842851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.843072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.843284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.843313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.843608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.843782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.843791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.492 qpair failed and we were unable to recover it. 00:25:02.492 [2024-04-19 04:16:16.843895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.844053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.492 [2024-04-19 04:16:16.844062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 
00:25:02.493 [2024-04-19 04:16:16.844322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.844539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.844569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.844874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.845260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.845621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.845737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.845906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.846185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.846610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.846838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 
00:25:02.493 [2024-04-19 04:16:16.847133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.847340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.847352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.847578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.847755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.847784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.847997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.848210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.848239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.848509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.848808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.848837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.849118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.849413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.849422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.849660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.849782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.849810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.849957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.850227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.850256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 
00:25:02.493 [2024-04-19 04:16:16.850479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.850568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.850577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.850808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.850992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.851020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.851171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.851412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.851442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.851659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.851888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.851916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.852134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.852356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.852386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.852588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.852855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.852884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.853037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.853326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.853375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 
00:25:02.493 [2024-04-19 04:16:16.853544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.853805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.853833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.854133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.854465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.854735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.854860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.855058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.855258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.855287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.493 qpair failed and we were unable to recover it. 00:25:02.493 [2024-04-19 04:16:16.855512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.493 [2024-04-19 04:16:16.855649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.855677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.855878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 
00:25:02.494 [2024-04-19 04:16:16.856274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.856635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.856819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.857000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.857275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.857304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.857544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.857757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.857786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.858023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.858298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.858326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.858547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.858663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.858672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.858908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.858994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.859003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 
00:25:02.494 [2024-04-19 04:16:16.859191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.859356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.859365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.859528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.859652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.859661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.859934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.860096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.860105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.860394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.860605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.860634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.860878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.861112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.861140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.861355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.861622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.861652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.861787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 
00:25:02.494 [2024-04-19 04:16:16.862307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.862731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.862973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.863175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.863332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.863371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.863592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.863806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.863836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.864137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.864333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.864372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.864665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.864920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.864929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.494 [2024-04-19 04:16:16.865138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.865367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.865376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 
00:25:02.494 [2024-04-19 04:16:16.865480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.865648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.494 [2024-04-19 04:16:16.865657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.494 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.865819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.866300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.866837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.866962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.867064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.867269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.867299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.867449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.867738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.867767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.867911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.868210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.868239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 
00:25:02.495 [2024-04-19 04:16:16.868454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.868685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.868713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.868919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.869190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.869227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.869484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.869739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.869747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.869927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.870108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.870137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.870382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.870599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.870627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.870903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.871174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.871203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.871413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.871592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.871621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 
00:25:02.495 [2024-04-19 04:16:16.871770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.872041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.872070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.872347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.872464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.872473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.872707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.872991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.873019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.873293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.873510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.873540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.873861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.874081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.874110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.874328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.874632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.874661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 00:25:02.495 [2024-04-19 04:16:16.874846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.875140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.495 [2024-04-19 04:16:16.875169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.495 qpair failed and we were unable to recover it. 
00:25:02.495 [2024-04-19 04:16:16.875447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.495 [2024-04-19 04:16:16.875596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.495 [2024-04-19 04:16:16.875625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.495 qpair failed and we were unable to recover it.
00:25:02.495 [... the same four-entry error pattern repeats without interruption from timestamp 04:16:16.875 through 04:16:16.935, on the order of 150 occurrences, elided here: two connect() failures with errno = 111 at posix.c:1037, a sock connection error for tqpair=0x7f83c0000b90 (addr=10.0.0.2, port=4420) at nvme_tcp.c:2371, and "qpair failed and we were unable to recover it." ...]
00:25:02.501 [2024-04-19 04:16:16.935982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.501 qpair failed and we were unable to recover it. 00:25:02.501 [2024-04-19 04:16:16.936460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.501 qpair failed and we were unable to recover it. 00:25:02.501 [2024-04-19 04:16:16.936749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.936944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.501 qpair failed and we were unable to recover it. 00:25:02.501 [2024-04-19 04:16:16.937139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.937310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.937320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.501 qpair failed and we were unable to recover it. 00:25:02.501 [2024-04-19 04:16:16.937492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.937608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.501 [2024-04-19 04:16:16.937617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.501 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.937884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.938244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 
00:25:02.502 [2024-04-19 04:16:16.938647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.938823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.938986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.939272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.939547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.939848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.939958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.940115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.940382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 
00:25:02.502 [2024-04-19 04:16:16.940687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.940856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.941015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.941430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.941649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.941848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.941952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.942289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.942654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.942919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 
00:25:02.502 [2024-04-19 04:16:16.943035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.943382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.502 [2024-04-19 04:16:16.943668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.502 [2024-04-19 04:16:16.943787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.502 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.943990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.944284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.944610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.944791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.944894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 
00:25:02.503 [2024-04-19 04:16:16.945107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.945482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.945648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.945890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.946368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.946659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.946875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.946974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.947323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 
00:25:02.503 [2024-04-19 04:16:16.947652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.947836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.948002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.948254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.948263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.948513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.948741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.948750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.949007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.949449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.949806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.949970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.950224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.950451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.950460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 
00:25:02.503 [2024-04-19 04:16:16.950583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.950744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.950756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.950928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.951180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.951190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.951385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.951544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.951553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.951783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.952215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.952555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.952874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.952984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 
00:25:02.503 [2024-04-19 04:16:16.953238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.953326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.953335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.503 [2024-04-19 04:16:16.953573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.953829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.503 [2024-04-19 04:16:16.953838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.503 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.953995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.954374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.954738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.954908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.955079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.955345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.955354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.955554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.955751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.955760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 
00:25:02.504 [2024-04-19 04:16:16.955921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.956291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.956575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.956840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.956945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.957161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.957600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.957720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.957897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 
00:25:02.504 [2024-04-19 04:16:16.958294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.958571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.958682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.958858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.959271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.959555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.959735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.959910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.960350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 
00:25:02.504 [2024-04-19 04:16:16.960674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.960809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.960900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.961259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.961542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.961720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.961879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.962286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 00:25:02.504 [2024-04-19 04:16:16.962688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.504 [2024-04-19 04:16:16.962799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.504 qpair failed and we were unable to recover it. 
00:25:02.504 [2024-04-19 04:16:16.962961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.963308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.963786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.963915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.964019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.964486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.964865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.964980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.965157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.965320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.965328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 
00:25:02.505 [2024-04-19 04:16:16.965526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.965762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.965771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.965952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.966196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.966205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.966457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.966713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.966722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.966834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.966996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.967005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.967258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.967485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.967493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.967665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.967847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.967856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.967962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 
00:25:02.505 [2024-04-19 04:16:16.968301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.968649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.968935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.969113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.969356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.969365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.969470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.969740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.969749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.969979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.970445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.970788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.970915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 
00:25:02.505 [2024-04-19 04:16:16.971167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.971367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.971376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.971553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.971807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.971816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.972045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.972271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.972280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.505 [2024-04-19 04:16:16.972460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.972650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.505 [2024-04-19 04:16:16.972659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.505 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.972829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.973259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.973545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 
00:25:02.506 [2024-04-19 04:16:16.973831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.973993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.974112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.974401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.974826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.974931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.975128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.975427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.975713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.975827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 
00:25:02.506 [2024-04-19 04:16:16.976005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.976299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.976649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.976939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.977135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.977296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.977305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.977535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.977713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.977722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.977917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.978364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 
00:25:02.506 [2024-04-19 04:16:16.978670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.978933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.979106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.979267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.979275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.979445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.979640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.979649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.979877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.980227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.980713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.980883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 00:25:02.506 [2024-04-19 04:16:16.980973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.981069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.981078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.506 qpair failed and we were unable to recover it. 
00:25:02.506 [2024-04-19 04:16:16.981242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.506 [2024-04-19 04:16:16.981335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.981348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.981507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.981708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.981717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.981837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.982366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.982652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.982749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.982930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.983452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 
00:25:02.507 [2024-04-19 04:16:16.983755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.983954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.984062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.984521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.984796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.984890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.985055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.985400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.985639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.985901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 
00:25:02.507 [2024-04-19 04:16:16.986007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.986116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.986125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.507 [2024-04-19 04:16:16.986304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.986406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.507 [2024-04-19 04:16:16.986416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.507 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.986648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.986760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.986769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.986959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.987195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.987408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.987700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.987906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 
00:25:02.783 [2024-04-19 04:16:16.988015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.988289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.988711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.988903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.989232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.989423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.989432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.989604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.989789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.989799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.989898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.783 [2024-04-19 04:16:16.990317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 
00:25:02.783 [2024-04-19 04:16:16.990530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.783 [2024-04-19 04:16:16.990712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.783 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.990808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.991240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.991612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.991801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.991921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.992296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.992692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.992820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 
00:25:02.784 [2024-04-19 04:16:16.993018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.993421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.993755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.993987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.994175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.994355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.994372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.994644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.994887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.994904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.995042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.995255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.995271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.995544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.995670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.995684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 
00:25:02.784 [2024-04-19 04:16:16.995954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.996363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.996721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.996833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.997004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.997294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.997650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.997779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.997979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 
00:25:02.784 [2024-04-19 04:16:16.998434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.998715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.998981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.999085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.999481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:16.999780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:16.999893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:17.000150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:17.000435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 
00:25:02.784 [2024-04-19 04:16:17.000798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.000975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.784 qpair failed and we were unable to recover it. 00:25:02.784 [2024-04-19 04:16:17.001079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.784 [2024-04-19 04:16:17.001178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.001190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.001366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.001541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.001550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.001647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.001821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.001830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.002010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.002362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.002725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.002838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 
00:25:02.785 [2024-04-19 04:16:17.003008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.003287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.003627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.003743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.003917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.004334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.004634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.004756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.004936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 
00:25:02.785 [2024-04-19 04:16:17.005261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.005602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.005723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.005886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.006233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.006546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.006734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.006908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.007198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 
00:25:02.785 [2024-04-19 04:16:17.007597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.007707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.007853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.008200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.008524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.008701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.008860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.009196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.009705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.009828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 
00:25:02.785 [2024-04-19 04:16:17.009999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.010109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.010118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.785 qpair failed and we were unable to recover it. 00:25:02.785 [2024-04-19 04:16:17.010229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.010426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.785 [2024-04-19 04:16:17.010435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.010549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.010721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.010730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.010836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.011213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.011494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.011887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.011993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 
00:25:02.786 [2024-04-19 04:16:17.012101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.012392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.012769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.012935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.013037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.013334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.013577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.013703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.013867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 
00:25:02.786 [2024-04-19 04:16:17.014259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.014554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.014730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.014844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.015306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.015750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.015916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.016094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.016367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 
00:25:02.786 [2024-04-19 04:16:17.016675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.016938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.017198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.017481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.017707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.017912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.018088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.018417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 00:25:02.786 [2024-04-19 04:16:17.018637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.786 [2024-04-19 04:16:17.018825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.786 qpair failed and we were unable to recover it. 
00:25:02.786 [2024-04-19 04:16:17.019081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.786 [2024-04-19 04:16:17.019252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.786 [2024-04-19 04:16:17.019261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.786 qpair failed and we were unable to recover it.
00:25:02.786 [2024-04-19 04:16:17.019459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.786 [2024-04-19 04:16:17.019629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.786 [2024-04-19 04:16:17.019638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.786 qpair failed and we were unable to recover it.
00:25:02.786 [2024-04-19 04:16:17.019840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.786 [2024-04-19 04:16:17.020121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.020130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.020239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.020468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.020477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.020653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.020760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.020769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.020999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.021227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.021236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.021487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.021693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.021702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.021868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.022228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.022596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.022865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.022981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.023233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.023494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.023502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.023691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.023854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.023863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.024032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.024380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.024760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.024972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.025083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.025361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.025370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.025573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.025686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.025695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.025788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.026195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.026554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.026666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.026828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.027121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.027433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.027862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.027985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.028153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.028391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.028400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.028589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.028703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.028711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.028965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.029250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.029537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.029776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.787 [2024-04-19 04:16:17.029975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.030076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.787 [2024-04-19 04:16:17.030085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.787 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.030315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.030437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.030445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.030641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.030744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.030753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.030982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.031297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.031668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.031831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.031995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.032329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.032632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.032867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.033065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.033257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.033557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.033737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.033843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.034267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.034763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.034943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.035080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.035228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.035256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.035464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.035731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.035740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.035910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.036277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.036574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.036750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.036859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.037244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.037568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.037683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.037848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.038048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.788 [2024-04-19 04:16:17.038058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.788 qpair failed and we were unable to recover it.
00:25:02.788 [2024-04-19 04:16:17.038316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.038730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.038761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.039062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.039333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.039375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.039667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.039918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.039926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.040105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.040337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.040376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.040583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.040804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.040833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.041080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.041243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.041272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.041545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.041695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.041704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.041807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.042264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.042629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.042748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.042872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.043183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.043534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.043851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.044102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.044392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.044422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.044640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.044897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.044926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.045175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.045511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.045542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.045792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.046184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.046664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.046896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.047031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.047334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.047374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.047602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.047831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.047860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.048084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.048296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.048324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.048581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.048737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.048765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.049011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.049305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.049334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.049621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.049841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.049870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.050141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.050308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.050337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.050639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.050892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.050900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.051129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.051254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.051283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.789 qpair failed and we were unable to recover it.
00:25:02.789 [2024-04-19 04:16:17.051491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.051689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.789 [2024-04-19 04:16:17.051717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.052033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.052324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.052360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.052582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.052786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.052815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.053086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.053245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.053253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.053455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.053686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.053695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.053927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.054323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.054728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.054933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.055058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.055316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.055693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.055801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.055972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.056233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.056261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.056435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.056663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.056692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.056892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.057328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.057723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.057993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.058245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.058464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.058493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.058684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.058886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.058895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.059095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.059454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.059784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.059968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.060087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.060389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.060673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.060787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.061017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.061204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.061238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.061445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.061712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.061741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.061928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.062251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.062603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.062919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.790 qpair failed and we were unable to recover it.
00:25:02.790 [2024-04-19 04:16:17.063080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.790 [2024-04-19 04:16:17.063268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.063296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.063454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.063729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.063759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.063999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.064229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.064258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.064559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.064793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.064821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.064971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.065196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.065225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.065433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.065637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.065671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.065877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.066157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.066185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.066476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.066804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.066833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.067055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.067508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.067888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.067982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.068217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.068513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.068548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.068705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.068965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.068994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.069146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.069357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.069387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.069599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.069737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.791 [2024-04-19 04:16:17.069765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.791 qpair failed and we were unable to recover it.
00:25:02.791 [2024-04-19 04:16:17.070044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.070242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.070281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.070555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.070750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.070759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.070935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.071360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.071803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.071975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.072179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.072385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.072415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.072715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.072853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.072881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 
00:25:02.791 [2024-04-19 04:16:17.073220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.073366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.073396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.073548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.073821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.073830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.074104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.074315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.074669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.074880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.075063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.075206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.075234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 00:25:02.791 [2024-04-19 04:16:17.075478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.075702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.791 [2024-04-19 04:16:17.075731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.791 qpair failed and we were unable to recover it. 
00:25:02.791 [2024-04-19 04:16:17.075968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.076213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.076221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.076369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.076613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.076643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.076860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.077382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.077827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.077903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.078131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.078386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.078417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.078696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.078992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.079021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 
00:25:02.792 [2024-04-19 04:16:17.079302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.079530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.079559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.079720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.079987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.080016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.080313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.080533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.080563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.080770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.080938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.080947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.081209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.081448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.081477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.081704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.081832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.081841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.082000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.082122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.082150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 
00:25:02.792 [2024-04-19 04:16:17.082285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.082556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.082586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.082860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.083209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.083555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.083743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.083973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.084271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.084571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 
00:25:02.792 [2024-04-19 04:16:17.084777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.084929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.085162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.085396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.085425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.085627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.085757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.085786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.792 qpair failed and we were unable to recover it. 00:25:02.792 [2024-04-19 04:16:17.086085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.792 [2024-04-19 04:16:17.086192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.086287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.086620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.086848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.086966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 
00:25:02.793 [2024-04-19 04:16:17.087075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.087387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.087784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.087906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.088071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.088360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.088549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.088739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.088875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 
00:25:02.793 [2024-04-19 04:16:17.089055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.089260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.089289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.089431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.089751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.089779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.090084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.090407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.090727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.090889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.091150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.091318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.091328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.091445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.091557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.091566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 
00:25:02.793 [2024-04-19 04:16:17.091757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.091992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.092000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.092171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.092371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.092381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.092543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.092616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.092625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.092793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.093247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.093620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.093960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.094180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.094368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.094397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 
00:25:02.793 [2024-04-19 04:16:17.094671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.094869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.094898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.095115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.095316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.095352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.095655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.095856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.095885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.793 [2024-04-19 04:16:17.096114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.096396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.793 [2024-04-19 04:16:17.096426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.793 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.096573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.096696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.096723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.096869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.097162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.097171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.097405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.097603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.097630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 
00:25:02.794 [2024-04-19 04:16:17.097784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.098282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.098696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.098808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.098972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.099265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.099664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.099784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.099945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 
00:25:02.794 [2024-04-19 04:16:17.100408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.100841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.100965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.101069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.101316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.101575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.101747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.101924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.102203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 
00:25:02.794 [2024-04-19 04:16:17.102561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.102731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.103022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.103157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.103186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.103418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.103577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.103606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.103828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.104141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.104168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.104377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.104605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.104633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.104833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.105337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 
00:25:02.794 [2024-04-19 04:16:17.105743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.105983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.106205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.106442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.106473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.106698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.106914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.106943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.107178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.107337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.107374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.794 qpair failed and we were unable to recover it. 00:25:02.794 [2024-04-19 04:16:17.107607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.107840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.794 [2024-04-19 04:16:17.107869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.108005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.108210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.108239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.108390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.108518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.108547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 
00:25:02.795 [2024-04-19 04:16:17.108822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.109355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.109803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.109997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.110212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.110514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.110543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.110836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.111325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.111663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.111783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 
00:25:02.795 [2024-04-19 04:16:17.111877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.112293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.112645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.112856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.113020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.113241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.113270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.113411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.113627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.113656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.113947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.114217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.114246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 00:25:02.795 [2024-04-19 04:16:17.114450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.114687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.795 [2024-04-19 04:16:17.114716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.795 qpair failed and we were unable to recover it. 
00:25:02.795 [2024-04-19 04:16:17.114875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.795 [2024-04-19 04:16:17.115043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.795 [2024-04-19 04:16:17.115068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.795 qpair failed and we were unable to recover it.
00:25:02.795 [2024-04-19 04:16:17.115219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.795 [2024-04-19 04:16:17.115423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.795 [2024-04-19 04:16:17.115452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.795 qpair failed and we were unable to recover it.
[... the same four-line failure repeats for every connect() retry from 04:16:17.115701 through 04:16:17.175480: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:25:02.801 [2024-04-19 04:16:17.175644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.801 [2024-04-19 04:16:17.175812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.801 [2024-04-19 04:16:17.175821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.801 qpair failed and we were unable to recover it.
00:25:02.801 [2024-04-19 04:16:17.176060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.176265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.176273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.176503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.176610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.176619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.176726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.177215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.177658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.177824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.177941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.178213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 
00:25:02.801 [2024-04-19 04:16:17.178613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.178716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.178900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.179166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.179505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.179696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.179883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.180138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.180390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 
00:25:02.801 [2024-04-19 04:16:17.180623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.180889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.181079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.181463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.181829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.181939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.182064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.182302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.182802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.182914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 
00:25:02.801 [2024-04-19 04:16:17.183144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.183236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.183244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.801 qpair failed and we were unable to recover it. 00:25:02.801 [2024-04-19 04:16:17.183423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.801 [2024-04-19 04:16:17.183630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.183638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.183813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.183913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.183922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.184112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.184339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.184353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.184516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.184698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.184706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.184879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.185073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.185085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.185322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.185577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.185586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 
00:25:02.802 [2024-04-19 04:16:17.185862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.186293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.186601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.186799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.186907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.187143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.187574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.187758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.187869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 
00:25:02.802 [2024-04-19 04:16:17.188145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.188618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.188903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.188999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.189161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.189328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.189336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.189570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.189829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.189839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.190013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.190278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 
00:25:02.802 [2024-04-19 04:16:17.190712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.190825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.191005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.191236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.191244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.191469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.191726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.191734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.191905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.192280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.192635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.192751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.193005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 
00:25:02.802 [2024-04-19 04:16:17.193447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.193800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.802 [2024-04-19 04:16:17.193908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.802 qpair failed and we were unable to recover it. 00:25:02.802 [2024-04-19 04:16:17.194137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.194372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.194735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.194901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.195026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.195286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 
00:25:02.803 [2024-04-19 04:16:17.195578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.195661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.195832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.196152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.196507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.196679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.196774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.197201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.197531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.197715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 
00:25:02.803 [2024-04-19 04:16:17.197925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.198361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.198735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.198850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.199107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.199495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.199855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.199956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.200077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 
00:25:02.803 [2024-04-19 04:16:17.200376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.200803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.200903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.201082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.201465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.201854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.201985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.202169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.202292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.202302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.202465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.202568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.202577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 
00:25:02.803 [2024-04-19 04:16:17.202761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.203139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.803 [2024-04-19 04:16:17.203498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.803 [2024-04-19 04:16:17.203790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.803 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.203879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.204312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.204630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.204732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.205004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.205267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.205276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 
00:25:02.804 [2024-04-19 04:16:17.205534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.205709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.205717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.205892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.206203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.206671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.206862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.207021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.207366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.207766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.207848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 
00:25:02.804 [2024-04-19 04:16:17.208087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.208353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.208362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.208586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.208705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.208713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.208891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.209260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.209467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.209715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.209927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.210105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.210113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 00:25:02.804 [2024-04-19 04:16:17.210276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.210503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.804 [2024-04-19 04:16:17.210511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.804 qpair failed and we were unable to recover it. 
00:25:02.804 [2024-04-19 04:16:17.210712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.804 [2024-04-19 04:16:17.210916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.804 [2024-04-19 04:16:17.210925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.804 qpair failed and we were unable to recover it.
[duplicate log output elided: the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, followed by the nvme_tcp_qpair_connect_sock error on tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats without variation from 04:16:17.211091 through 04:16:17.259314]
00:25:02.808 [2024-04-19 04:16:17.259505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2cb00 is same with the state(5) to be set
[duplicate log output elided: three repetitions of the same failure sequence against the transient tqpair=0x7f83c8000b90 (04:16:17.259908 through 04:16:17.261494), after which the run resumes against tqpair=0x7f83c0000b90 from 04:16:17.261757 and repeats unchanged through 04:16:17.278186]
00:25:02.810 [2024-04-19 04:16:17.278429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.810 [2024-04-19 04:16:17.278697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.810 [2024-04-19 04:16:17.278706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:02.810 qpair failed and we were unable to recover it.
00:25:02.810 [2024-04-19 04:16:17.278883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.279144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.279173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.279483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.279695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.279724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.279886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.280152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.280181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.280463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.280763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.280792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.280951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.281245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.281274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.281570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.281730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.281741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.282014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.282271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.282279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 
00:25:02.810 [2024-04-19 04:16:17.282529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.282789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.282798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.283054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.283313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.283322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.283601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.283831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.283840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.284096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.284265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.284275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.284504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.284760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.284769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.284964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.285216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.285225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.285483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.285732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.285740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 
00:25:02.810 [2024-04-19 04:16:17.285995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.286177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.286185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.286383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.286642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.286652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.286889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.287308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.287730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.287919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.288159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.288333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.288345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.288520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.288808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.288817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 
00:25:02.810 [2024-04-19 04:16:17.288997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.289155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.289164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.810 qpair failed and we were unable to recover it. 00:25:02.810 [2024-04-19 04:16:17.289357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.810 [2024-04-19 04:16:17.289529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.289538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.289794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.290045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.290054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.290308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.290572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.290581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.290834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.291015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.291031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.291302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.291563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.291573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.291808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.292086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.292115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 
00:25:02.811 [2024-04-19 04:16:17.292391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.292739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.292767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.293051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.293330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.293346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.293630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.293876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.293885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.294126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.294410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.294419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.294731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.294879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.294908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:02.811 [2024-04-19 04:16:17.295211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.295544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.811 [2024-04-19 04:16:17.295556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:02.811 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.295853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 
00:25:03.083 [2024-04-19 04:16:17.296236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.296688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.296961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.297188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.297381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.297390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.297487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.297759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.297768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.297940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.298184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.298193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.083 [2024-04-19 04:16:17.298365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.298535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.083 [2024-04-19 04:16:17.298544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.083 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.298825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-04-19 04:16:17.299268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.299639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.299901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.300138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.300414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.300444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.300747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.301073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.301102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.301324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.301627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.301656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.301936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.302282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.302310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.302538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.302759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.302788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-04-19 04:16:17.303025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.303299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.303328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.303679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.303966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.303994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.304320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.304570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.304599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.304922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.305219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.305248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.305575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.305875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.305904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.306130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.306425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.306455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.306778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.307074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.307083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-04-19 04:16:17.307315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.307567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.307577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.307863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.308350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.308795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.308998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.309183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.309375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.309392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.309661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.309950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.309959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.310162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.310368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.310397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 
00:25:03.084 [2024-04-19 04:16:17.310665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.310940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.310968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.311272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.311547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.311577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.311919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.312211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.312247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.312514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.312749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.312758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.313025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.313295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.313324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.313566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.313854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.084 [2024-04-19 04:16:17.313882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.084 qpair failed and we were unable to recover it. 00:25:03.084 [2024-04-19 04:16:17.314175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.314442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.314472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-04-19 04:16:17.314689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.314985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.315013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.315224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.315551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.315581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.315830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.316330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.316795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.316924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.317097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.317356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.317365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.317611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.317893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.317901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-04-19 04:16:17.318138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.318321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.318360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.318674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.318917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.318945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.319149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.319420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.319429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.319606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.319873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.319902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.320227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.320527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.320556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.320883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.321148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.321176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.321461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.321617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.321646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-04-19 04:16:17.321944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.322144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.322173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.322313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.322631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.322660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.322917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.323209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.323238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.323567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.323884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.323913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.324230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.324441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.324470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.324767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.325035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.325063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.325268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.325541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.325571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 
00:25:03.085 [2024-04-19 04:16:17.325870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.326137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.326166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.326479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.326655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.326663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.326829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.327015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.327023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.327259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.327531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.327561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.327834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.328129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.328163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.328345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.328610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.328638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.085 qpair failed and we were unable to recover it. 00:25:03.085 [2024-04-19 04:16:17.328885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.329131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.085 [2024-04-19 04:16:17.329160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.086 qpair failed and we were unable to recover it. 
00:25:03.086 [2024-04-19 04:16:17.329465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.086 [2024-04-19 04:16:17.329676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.086 [2024-04-19 04:16:17.329705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.086 qpair failed and we were unable to recover it.
[... the same four-message sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats for every reconnect attempt from 04:16:17.329 through 04:16:17.410 ...]
00:25:03.091 [2024-04-19 04:16:17.410043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.091 [2024-04-19 04:16:17.410293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.091 [2024-04-19 04:16:17.410302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.091 qpair failed and we were unable to recover it.
00:25:03.091 [2024-04-19 04:16:17.410504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.410768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.410778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-04-19 04:16:17.411064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.411268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.411277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-04-19 04:16:17.411524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.411758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.411767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-04-19 04:16:17.412052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.412304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.412313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-04-19 04:16:17.412495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.412758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.412767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.091 qpair failed and we were unable to recover it. 00:25:03.091 [2024-04-19 04:16:17.413030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.413211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.091 [2024-04-19 04:16:17.413220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.413458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.413647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.413657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-04-19 04:16:17.413822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.414062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.414072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.414339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.414609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.414618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.414803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.415060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.415069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.415307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.415519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.415528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.415763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.416024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.416033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.416213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.416500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.416509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.416802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-04-19 04:16:17.417188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.417690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.417894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.418091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.418326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.418335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.418555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.418739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.418748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.418998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.419287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.419296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.419463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.419699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.419708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.419944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.420130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.420139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-04-19 04:16:17.420397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.420607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.420616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.420852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.421261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.421774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.421991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.422201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.422456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.422465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.422719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.422986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.422995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.423254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.423424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.423433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 
00:25:03.092 [2024-04-19 04:16:17.423697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.423959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.423968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.424162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.424400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.424409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.424646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.424883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.424892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.425185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.425450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.425459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.092 qpair failed and we were unable to recover it. 00:25:03.092 [2024-04-19 04:16:17.425628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.092 [2024-04-19 04:16:17.425811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.425820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.426084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.426277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.426286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.426503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.426753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.426762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-04-19 04:16:17.427015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.427202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.427212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.427477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.427783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.427792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.428058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.428217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.428227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.428461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.428753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.428762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.428928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.429096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.429105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.429365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.429554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.429563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.429797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.430056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.430065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-04-19 04:16:17.430238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.430502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.430512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.430726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.431258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.431611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.431730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.431991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.432196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.432205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.432372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.432570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.432580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.432835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.433069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.433079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-04-19 04:16:17.433269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.433589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.433599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.433774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.433995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.434004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.434233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.434442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.434452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.434655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.434925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.434936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.435227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.435489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.435500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.435760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.436033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.436042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 00:25:03.093 [2024-04-19 04:16:17.436152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.436334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.436350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.093 qpair failed and we were unable to recover it. 
00:25:03.093 [2024-04-19 04:16:17.436665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.093 [2024-04-19 04:16:17.436830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.436840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.437029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.437262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.437271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.437529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.437720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.437729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.437844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.438120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.438129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.438379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.438609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.438618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.438748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.439175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-04-19 04:16:17.439643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.439913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.440108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.440302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.440311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.440531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.440726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.440737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.440957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.441225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.441234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.441418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.441611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.441621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.441820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.442310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-04-19 04:16:17.442742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.442936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.443229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.443358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.443368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.443535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.443651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.443660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.443897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.444355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.444666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.444913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.445098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.445361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.445371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 
00:25:03.094 [2024-04-19 04:16:17.445641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.445877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.445886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.446120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.446405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.446415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.446663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.446917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.446926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.447136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.447396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.447405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.447639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.447839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.447848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.448132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.448308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.448317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.094 qpair failed and we were unable to recover it. 00:25:03.094 [2024-04-19 04:16:17.448555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.448738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.094 [2024-04-19 04:16:17.448748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-04-19 04:16:17.448950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.449122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.449132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.449373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.449615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.449624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.449910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.450373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.450696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.450938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.451221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.451457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.451466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.451727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.451964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.451973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-04-19 04:16:17.452239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.452503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.452512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.452754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.452887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.452896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.453161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.453274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.453283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.453544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.453733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.453743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.453976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.454092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.454101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.454351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.454551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.454560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 00:25:03.095 [2024-04-19 04:16:17.454821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.455075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.095 [2024-04-19 04:16:17.455083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.095 qpair failed and we were unable to recover it. 
00:25:03.095 [2024-04-19 04:16:17.455362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.095 [2024-04-19 04:16:17.455535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.095 [2024-04-19 04:16:17.455545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.095 qpair failed and we were unable to recover it.
00:25:03.095 [2024-04-19 04:16:17.455829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.095 [2024-04-19 04:16:17.456005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.095 [2024-04-19 04:16:17.456014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.095 qpair failed and we were unable to recover it.
00:25:03.097 [2024-04-19 04:16:17.480659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.097 [2024-04-19 04:16:17.480977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.097 [2024-04-19 04:16:17.481016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420
00:25:03.097 qpair failed and we were unable to recover it.
00:25:03.098 [2024-04-19 04:16:17.485513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.098 [2024-04-19 04:16:17.485782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.098 [2024-04-19 04:16:17.485811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd1ef90 with addr=10.0.0.2, port=4420
00:25:03.098 qpair failed and we were unable to recover it.
00:25:03.098 [2024-04-19 04:16:17.486174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.098 [2024-04-19 04:16:17.486479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.098 [2024-04-19 04:16:17.486509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.098 qpair failed and we were unable to recover it.
00:25:03.101 [2024-04-19 04:16:17.524980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.101 [2024-04-19 04:16:17.525187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.101 [2024-04-19 04:16:17.525196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.101 qpair failed and we were unable to recover it.
00:25:03.101 [2024-04-19 04:16:17.525306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.525424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.525434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.525642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.525852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.525861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.526091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.526258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.526267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.526523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.526716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.526725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.526845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.527059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.527068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.527308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.527587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.527596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.527852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 
00:25:03.101 [2024-04-19 04:16:17.528310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.528630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.528840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.529048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.529281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.529290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.529472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.529652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.529661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.529936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.530306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.530667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.530860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 
00:25:03.101 [2024-04-19 04:16:17.531065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.531296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.531305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.531522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.531650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.531659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.531788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.531993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.532001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.532193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.532395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.532404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.101 qpair failed and we were unable to recover it. 00:25:03.101 [2024-04-19 04:16:17.532588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.101 [2024-04-19 04:16:17.532755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.532764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.533022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.533196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.533205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.533461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.533635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.533663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 
00:25:03.102 [2024-04-19 04:16:17.533953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.534163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.534192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.534464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.534731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.534760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.534962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.535171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.535199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.535502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.535773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.535802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.536078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.536319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.536327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.536501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.536699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.536728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.536956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.537232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.537260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 
00:25:03.102 [2024-04-19 04:16:17.537562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.537716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.537744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.538042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.538413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.538731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.538944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.539244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.539532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.539562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.539792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.539969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.539998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.540383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.540655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.540685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 
00:25:03.102 [2024-04-19 04:16:17.540925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.541475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.541806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.541965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.542213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.542520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.102 [2024-04-19 04:16:17.542550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.102 qpair failed and we were unable to recover it. 00:25:03.102 [2024-04-19 04:16:17.542772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.542971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.543000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.543219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.543404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.543434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.543718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.543928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.543956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 
00:25:03.103 [2024-04-19 04:16:17.544259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.544467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.544496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.544721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.544922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.544951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.545169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.545443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.545473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.545706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.545849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.545878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.546229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.546508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.546539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.546802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.547006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.547035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.547306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.547630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.547663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 
00:25:03.103 [2024-04-19 04:16:17.547832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.548341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.548650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.548873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.549159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.549434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.549464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.549711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.549955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.549984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.550191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.550454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.550484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.550649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.550945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.550973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 
00:25:03.103 [2024-04-19 04:16:17.551180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.551351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.551376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.551551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.551750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.551779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.551986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.552214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.552243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.552402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.552560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.552589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.552802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.552979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.553007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.553278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.553477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.553507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.553674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.553826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.553855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 
00:25:03.103 [2024-04-19 04:16:17.554145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.554463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.554493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.554660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.554881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.554910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.103 [2024-04-19 04:16:17.555142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.555398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.103 [2024-04-19 04:16:17.555427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.103 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.555655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.555826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.555855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.556160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.556385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.556416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.556707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.556992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.557001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.557275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.557577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.557608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 
00:25:03.104 [2024-04-19 04:16:17.557952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.558264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.558292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.558527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.558749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.558778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.559000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.559238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.559266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.559540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.559768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.559797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.560133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.560372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.560402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.560567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.560842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.560870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.561085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.561397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.561427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 
00:25:03.104 [2024-04-19 04:16:17.561726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.562028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.562056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.562330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.562644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.562673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.563021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.563502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.563759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.563873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.564161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.564398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.564407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.564608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.564838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.564847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 
00:25:03.104 [2024-04-19 04:16:17.565031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.565200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.565209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.565438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.565562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.565573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.565849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.566178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.566207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.566456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.566670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.566698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.104 qpair failed and we were unable to recover it. 00:25:03.104 [2024-04-19 04:16:17.566929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.104 [2024-04-19 04:16:17.567253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.567281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.567505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.567800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.567829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.567978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.568171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.568180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-04-19 04:16:17.568467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.568697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.568726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.568943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.569155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.569183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.569431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.569657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.569685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.569836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.570072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.570101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.570355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.570516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.570550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.570777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.571290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-04-19 04:16:17.571720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.571857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.572110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.572383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.572392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.572495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.572706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.572715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.572837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.573138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.573167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.573468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.573710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.573738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.574020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.574291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.574300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.574478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.574650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.574659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-04-19 04:16:17.574781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.575042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.575076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.575324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.575638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.575667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.575890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.576227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.576255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.576429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.576733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.576761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.577124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.577429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.577438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.577642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.577820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.577848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.578101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.578320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.578358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 
00:25:03.105 [2024-04-19 04:16:17.578580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.578712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.578741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.579017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.579308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.579317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.105 [2024-04-19 04:16:17.579508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.579676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.105 [2024-04-19 04:16:17.579684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.105 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.579939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.580133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.580162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.580418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.580637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.580667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.580879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.581140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.581169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.581389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.581641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.581669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-04-19 04:16:17.581831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.582158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.582187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.582429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.582599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.582627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.582877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.583013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.583021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.583264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.583545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.583575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.583805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.584059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.584087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.584383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.584529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.584537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.584767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.584974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.585003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-04-19 04:16:17.585313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.585569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.585600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.585757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.585903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.585932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.586083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.586350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.586359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.586547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.586746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.586755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.586859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.587072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.587101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.587404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.587574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.587603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.587818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.587980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.588009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 
00:25:03.106 [2024-04-19 04:16:17.588300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.588535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.588565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.588792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.589019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.589047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.589327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.589524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.589553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.589782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.590172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.590201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.590427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.590718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.106 [2024-04-19 04:16:17.590756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.106 qpair failed and we were unable to recover it. 00:25:03.106 [2024-04-19 04:16:17.590963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.591223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.591232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.591519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.591650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.591659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-04-19 04:16:17.591862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.592149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.592178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.592523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.592707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.592751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.592983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.593208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.593217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.593477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.593584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.593593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.593807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.593972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.594001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.594149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.594370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.594400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.594574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.594842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.594887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 
00:25:03.107 [2024-04-19 04:16:17.595229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.595521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.595531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.595746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.595893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.595922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.596189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.596456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.596466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.596670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.596861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.107 [2024-04-19 04:16:17.596870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.107 qpair failed and we were unable to recover it. 00:25:03.107 [2024-04-19 04:16:17.597058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.597298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.597308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.385 qpair failed and we were unable to recover it. 00:25:03.385 [2024-04-19 04:16:17.597571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.597754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.597763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.385 qpair failed and we were unable to recover it. 00:25:03.385 [2024-04-19 04:16:17.598056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.598235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.598244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.385 qpair failed and we were unable to recover it. 
00:25:03.385 [2024-04-19 04:16:17.598497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.598678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.598687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.385 qpair failed and we were unable to recover it. 00:25:03.385 [2024-04-19 04:16:17.598918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.599125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.599134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.385 qpair failed and we were unable to recover it. 00:25:03.385 [2024-04-19 04:16:17.599399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.599523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.385 [2024-04-19 04:16:17.599532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.599766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.599872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.599881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.600051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.600359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.600389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.600716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.601265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 
00:25:03.386 [2024-04-19 04:16:17.601686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.601954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.602196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.602488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.602518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.602789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.602944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.602973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.603276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.603447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.603477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.603642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.603838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.603867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.604147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.604364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.604373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.604589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.604705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.604714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 
00:25:03.386 [2024-04-19 04:16:17.605038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.605213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.605242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.605483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.605745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.605774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.605949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.606266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.606295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.606472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.606746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.606775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.606939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.607307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.607336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.607663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.607885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.607914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.608233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.608538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.608568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 
00:25:03.386 [2024-04-19 04:16:17.608858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.609183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.609191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.609478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.609764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.609793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.609951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.610282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.610311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.610610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.610782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.610811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.611046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.611372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.611402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.611627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.611829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.611857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.612158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.612398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.612423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 
00:25:03.386 [2024-04-19 04:16:17.612600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.612801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.612810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.612934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.613041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.613049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.613384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.613608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.386 [2024-04-19 04:16:17.613636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.386 qpair failed and we were unable to recover it. 00:25:03.386 [2024-04-19 04:16:17.613791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.613990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.614019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.614273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.614541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.614571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.614877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.615379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 
00:25:03.387 [2024-04-19 04:16:17.615813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.615935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.616180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.616378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.616387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.616564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.616766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.616795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.616975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.617294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.617322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.617614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.617840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.617869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.618167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.618262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.618272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.618408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.618591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.618600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 
00:25:03.387 [2024-04-19 04:16:17.618867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.619329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.619812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.619990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.620222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.620554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.620585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.620840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.621172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.621201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.621386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.621626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.621659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.621934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 
00:25:03.387 [2024-04-19 04:16:17.622314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.622769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.622956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.623179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.623421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.623451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.623655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.623933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.623962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.624120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.624394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.624424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.624648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.624939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.624967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.625292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.625594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.625623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 
00:25:03.387 [2024-04-19 04:16:17.625959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.626164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.626192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.626371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.626588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.626608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.626842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.627077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.627086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.387 qpair failed and we were unable to recover it. 00:25:03.387 [2024-04-19 04:16:17.627350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.627479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.387 [2024-04-19 04:16:17.627488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.627603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.627835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.627844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.628041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.628312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.628320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.628578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.628829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.628837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 
00:25:03.388 [2024-04-19 04:16:17.629043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.629289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.629634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.629777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.630064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.630250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.630278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.630507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.630659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.630687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.630940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.631150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.631178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 00:25:03.388 [2024-04-19 04:16:17.631393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.631592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.388 [2024-04-19 04:16:17.631621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.388 qpair failed and we were unable to recover it. 
00:25:03.388 [2024-04-19 04:16:17.631795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.388 [2024-04-19 04:16:17.631965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.388 [2024-04-19 04:16:17.631993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.388 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix_sock_create connect() errors, one nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it.") repeats for every retry from 04:16:17.632 through 04:16:17.642 ...]
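Every failure in this stretch is the same event: errno 111 is ECONNREFUSED on Linux, meaning nothing is listening on 10.0.0.2 port 4420 (the standard NVMe/TCP port) while the target is down, so each socket the host initiator opens is refused and the TCP qpair cannot be rebuilt. A minimal standalone sketch of the same failure mode, assuming only a Linux host with no listener on that address; this is illustrative, not SPDK's actual reconnect path:

```c
/* Minimal sketch: connect() to a TCP port with no listener fails with
 * ECONNREFUSED (errno 111 on Linux), which is what posix.c logs above
 * while nvmf_tgt is down. Illustrative only, not SPDK's retry logic. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            /* With no listener this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        close(fd);
    }
    return 0;
}
```

Run against an address with no listener, each iteration prints errno 111, matching the posix.c messages above.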
[... connect() retry failures continue from 04:16:17.642 through 04:16:17.643 ...]
00:25:03.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3965922 Killed "${NVMF_APP[@]}" "$@"
00:25:03.389 04:16:17 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
[... connect() retry failures continue through 04:16:17.644 ...]
00:25:03.389 04:16:17 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:03.389 04:16:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:03.389 04:16:17 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:03.389 04:16:17 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() retry failures continue from 04:16:17.645 through 04:16:17.650 ...]
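At this point the harness restarts the target it just killed: nvmfappstart relaunches nvmf_tgt with -m 0xF0, a hexadecimal CPU core mask in the usual SPDK convention where each set bit assigns one core, so 0xF0 selects cores 4 through 7. A small sketch of how such a mask decodes; the decoder is illustrative, only the mask value comes from the log:

```c
/* Decode a hexadecimal core mask like the "-m 0xF0" passed to nvmf_tgt:
 * each set bit selects one CPU core, so 0xF0 -> cores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;              /* from "-m 0xF0" in the log */

    for (unsigned core = 0; core < 8 * sizeof(mask); core++)
        if (mask & (1UL << core))
            printf("core %u selected\n", core);
    return 0;
}
```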
[... connect() retry failures continue from 04:16:17.651 through 04:16:17.652 ...]
00:25:03.389 04:16:17 -- nvmf/common.sh@470 -- # nvmfpid=3966693
00:25:03.389 04:16:17 -- nvmf/common.sh@471 -- # waitforlisten 3966693
00:25:03.389 04:16:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:03.390 04:16:17 -- common/autotest_common.sh@817 -- # '[' -z 3966693 ']'
00:25:03.390 04:16:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
[... interleaved connect() retry failures continue through 04:16:17.653 ...]
00:25:03.390 04:16:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:03.390 04:16:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:03.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:03.390 04:16:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:03.390 04:16:17 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() retry failures continue from 04:16:17.653 through 04:16:17.655 ...]
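waitforlisten then blocks until the freshly launched target (nvmfpid=3966693) is alive and its RPC socket /var/tmp/spdk.sock accepts connections, giving up after max_retries=100. The real helper is a bash function in autotest_common.sh; the following is a rough C equivalent of that readiness check, with the poll interval and overall structure being assumptions:

```c
/* Rough sketch of what waitforlisten does: poll until the process is
 * alive and something accepts connections on its UNIX-domain RPC socket.
 * The interval and retry structure are illustrative assumptions. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_sock_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    pid_t pid = 3966693;                    /* nvmfpid from the log */
    const char *rpc_addr = "/var/tmp/spdk.sock";

    for (int retry = 0; retry < 100; retry++) {   /* max_retries=100 */
        if (kill(pid, 0) != 0) {                  /* process still alive? */
            fprintf(stderr, "process %d exited\n", (int)pid);
            return 1;
        }
        if (rpc_sock_ready(rpc_addr)) {
            printf("process %d is listening on %s\n", (int)pid, rpc_addr);
            return 0;
        }
        usleep(100 * 1000);                       /* wait 100 ms and retry */
    }
    fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
    return 1;
}
```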
00:25:03.390 [2024-04-19 04:16:17.655961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.390 [2024-04-19 04:16:17.656205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.390 [2024-04-19 04:16:17.656215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.390 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats continuously from 04:16:17.656 through 04:16:17.688 ...]
00:25:03.393 [2024-04-19 04:16:17.688916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.393 [2024-04-19 04:16:17.689005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.393 [2024-04-19 04:16:17.689013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.393 qpair failed and we were unable to recover it. 00:25:03.393 [2024-04-19 04:16:17.689112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.393 [2024-04-19 04:16:17.689216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.393 [2024-04-19 04:16:17.689227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.689336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.689549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.689758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.689878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.689978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.690257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 
00:25:03.394 [2024-04-19 04:16:17.690520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.690731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.690838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.690960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.691304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.691513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.691783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.691959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.692117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 
00:25:03.394 [2024-04-19 04:16:17.692328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.692529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.692733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.692905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.693081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.693323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.693521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.693749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.693855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 
00:25:03.394 [2024-04-19 04:16:17.693958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.694161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.694362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.694551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.694742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.694841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.694999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.695158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.695166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.394 qpair failed and we were unable to recover it. 00:25:03.394 [2024-04-19 04:16:17.695255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.394 [2024-04-19 04:16:17.695348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.695357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 
00:25:03.395 [2024-04-19 04:16:17.695450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.695626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.695634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.695737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.695851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.695859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.695950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.696150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.696359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.696706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.696822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.696924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 
00:25:03.395 [2024-04-19 04:16:17.697117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.697401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.697619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.697811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.697942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.698049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.698348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.698625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.698739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 
00:25:03.395 [2024-04-19 04:16:17.698826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.699158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.699493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.699686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.699875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.699966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.700162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.700360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.395 [2024-04-19 04:16:17.700460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.395 qpair failed and we were unable to recover it. 00:25:03.395 [2024-04-19 04:16:17.700515] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:25:03.395 [2024-04-19 04:16:17.700570] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
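A note on the parameters line above: DPDK's -c flag takes a hexadecimal core mask, so -c 0xF0 pins the nvmf target to lcores 4-7 (one set bit per logical core). A minimal sketch of the decoding; the mask value comes from the log, the rest is illustrative:

    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;                  /* from "-c 0xF0" in the EAL parameters */
        for (int core = 0; core < 64; core++)       /* one bit per lcore */
            if (mask & (1UL << core))
                printf("lcore %d enabled\n", core); /* prints lcores 4, 5, 6, 7 */
        return 0;
    }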
00:25:03.396 [2024-04-19 04:16:17.702151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.702424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.702618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.702849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.702945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.703057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.703267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.703640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.703813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 
00:25:03.396 [2024-04-19 04:16:17.703921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.704139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.704478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.704687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.704878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.705038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.705129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.705138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.705304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.705534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.705544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.705824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 
00:25:03.396 [2024-04-19 04:16:17.706228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.706526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.706735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.706845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.706932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.707201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.707539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.707835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.707995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 
00:25:03.396 [2024-04-19 04:16:17.708171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.708381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.708680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.708787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.708943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.709112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.709121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.709347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.709509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.709517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.396 qpair failed and we were unable to recover it. 00:25:03.396 [2024-04-19 04:16:17.709606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.396 [2024-04-19 04:16:17.709701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.709710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.709800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.709958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.709967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 
00:25:03.397 [2024-04-19 04:16:17.710132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.710529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.710812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.710931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.711020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.711438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.711777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.711955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.712083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 
00:25:03.397 [2024-04-19 04:16:17.712296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.712641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.712808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.712966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.713205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.713406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.713628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.713815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.713914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 
00:25:03.397 [2024-04-19 04:16:17.714177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.714451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.714779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.714889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.715048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.715383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.715587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 00:25:03.397 [2024-04-19 04:16:17.715864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.397 [2024-04-19 04:16:17.715981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.397 qpair failed and we were unable to recover it. 
00:25:03.400 EAL: No free 2048 kB hugepages reported on node 1
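The EAL notice above is DPDK reporting that NUMA node 1 had no free 2048 kB hugepages when the environment was probed. As a hedged aside (not part of this run), the per-node counter EAL consults is exposed through standard Linux sysfs and can be read directly; the path below is the stock kernel location for node 1.

/* Illustrative sketch only: read the free 2048 kB hugepage count for
 * NUMA node 1 from the standard Linux sysfs location. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1) {
        printf("node1 free 2048 kB hugepages: %ld\n", free_pages);
    }
    fclose(f);
    return 0;
}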
00:25:03.402 [2024-04-19 04:16:17.756872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.756970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.756979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.402 qpair failed and we were unable to recover it. 00:25:03.402 [2024-04-19 04:16:17.757158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.757260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.757269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.402 qpair failed and we were unable to recover it. 00:25:03.402 [2024-04-19 04:16:17.757442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.757670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.757679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.402 qpair failed and we were unable to recover it. 00:25:03.402 [2024-04-19 04:16:17.757900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.758000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.758009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.402 qpair failed and we were unable to recover it. 00:25:03.402 [2024-04-19 04:16:17.758102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.758194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.758203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.402 qpair failed and we were unable to recover it. 00:25:03.402 [2024-04-19 04:16:17.758295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.402 [2024-04-19 04:16:17.758389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.758399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.758508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.758602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.758611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 
00:25:03.403 [2024-04-19 04:16:17.758705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.758878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.758887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.758979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.759259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.759440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.759636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.759821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.759919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.760079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 
00:25:03.403 [2024-04-19 04:16:17.760266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.760592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.760794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.760889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.761060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.761337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.761616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.761800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.761909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 
00:25:03.403 [2024-04-19 04:16:17.762138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.762338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.762538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.762894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.762994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.763161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.763381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.763620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.763886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 
00:25:03.403 [2024-04-19 04:16:17.763974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.764184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.764403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.764589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.764737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.764827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.765072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.765080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.765179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.765366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.765375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.403 qpair failed and we were unable to recover it. 00:25:03.403 [2024-04-19 04:16:17.765570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.403 [2024-04-19 04:16:17.765662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.765671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-04-19 04:16:17.765765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.765853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.765862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.765954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.766274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.766509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.766786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.766988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.767090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.767376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-04-19 04:16:17.767593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.767805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.767919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.768031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.768266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.768545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.768730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.768841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.768960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-04-19 04:16:17.769240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.769440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.769630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.769852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.769970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.770131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.770414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.770696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.770935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 
00:25:03.404 [2024-04-19 04:16:17.771041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.771259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.404 qpair failed and we were unable to recover it. 00:25:03.404 [2024-04-19 04:16:17.771477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.404 [2024-04-19 04:16:17.771652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-04-19 04:16:17.771849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.771956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.771965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-04-19 04:16:17.772111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-04-19 04:16:17.772349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 00:25:03.405 [2024-04-19 04:16:17.772646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.405 [2024-04-19 04:16:17.772822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.405 qpair failed and we were unable to recover it. 
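The three messages above repeat verbatim for every connection attempt: errno 111 is ECONNREFUSED, i.e. the initiator's connect() reached the host at 10.0.0.2 but nothing was listening on port 4420 (the NVMe/TCP default), so each qpair connect fails immediately. As a minimal standalone sketch (not part of this test run; the address and port are simply copied from the log), the same failure mode can be reproduced against any reachable host with no listener bound to the port:

    /* sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED).
     * Address/port taken from the log above; any port with no listener
     * on a reachable host behaves the same way. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));      /* 111 = ECONNREFUSED */
        close(fd);
        return 0;
    }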
00:25:03.405 [... cycle continues for tqpair=0x7f83c0000b90 through 2024-04-19 04:16:17.773290 ...]
00:25:03.405 [2024-04-19 04:16:17.773449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.405 [2024-04-19 04:16:17.773585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.405 [2024-04-19 04:16:17.773608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c8000b90 with addr=10.0.0.2, port=4420
00:25:03.405 qpair failed and we were unable to recover it.
00:25:03.405 [... same cycle repeated for tqpair=0x7f83c8000b90 through 2024-04-19 04:16:17.775060, then again for tqpair=0x7f83c0000b90 through 2024-04-19 04:16:17.776686 ...]
00:25:03.405 [... identical connect()/qpair-failure cycle repeated from 2024-04-19 04:16:17.776792 through 2024-04-19 04:16:17.789260, all for tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 ...]
00:25:03.407 [2024-04-19 04:16:17.789953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
[repeated identical connect() failure / qpair recovery-failure entries omitted]
00:25:03.413 [2024-04-19 04:16:17.826091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.413 [2024-04-19 04:16:17.826182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.413 [2024-04-19 04:16:17.826192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.413 qpair failed and we were unable to recover it.
00:25:03.413 [2024-04-19 04:16:17.826281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.826368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.826378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.826559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.826739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.826748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.826845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.827129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.827399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.827618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.827892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.827990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 
00:25:03.413 [2024-04-19 04:16:17.828093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.828278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.828557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.828825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.828938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.829045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.829315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.829536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 
00:25:03.413 [2024-04-19 04:16:17.829801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.829904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.830069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.830302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.830492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.830765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.413 [2024-04-19 04:16:17.830969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.413 qpair failed and we were unable to recover it. 00:25:03.413 [2024-04-19 04:16:17.831078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.831266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-04-19 04:16:17.831550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.831824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.831927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.832038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.832256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.832537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.832736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.832900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.833315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-04-19 04:16:17.833548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.833857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.833977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.834092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.834497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.834767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.834895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.835058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.835411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-04-19 04:16:17.835708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.835903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.836011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.836288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.836511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.836738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.836839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.836946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.837204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 
00:25:03.414 [2024-04-19 04:16:17.837426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.414 [2024-04-19 04:16:17.837681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.414 [2024-04-19 04:16:17.837850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.414 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.837959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.838307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.838631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.838816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.838976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.839257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-04-19 04:16:17.839470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.839758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.839937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.840056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.840334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.840680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.840790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.840925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.841212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-04-19 04:16:17.841495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.841776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.841966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.842057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.842311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.842321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.842488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.842665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.842674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.842835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.843169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.843621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-04-19 04:16:17.843846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.843948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.844043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.844330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.844626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.844795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.844897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.845228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.845457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 
00:25:03.415 [2024-04-19 04:16:17.845668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.845883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.845979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.846147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.846156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.415 qpair failed and we were unable to recover it. 00:25:03.415 [2024-04-19 04:16:17.846265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.415 [2024-04-19 04:16:17.846435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.846445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.846613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.846786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.846795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.846888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.847171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.847448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-04-19 04:16:17.847745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.847864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.848029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.848257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.848550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.848669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.848851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.849300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.849564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-04-19 04:16:17.849765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.849952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.850094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.850442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.850701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.850938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.851051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.851252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.851458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-04-19 04:16:17.851811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.851924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.852086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.852360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.852581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.852818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.852951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.853225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 00:25:03.416 [2024-04-19 04:16:17.853452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.416 [2024-04-19 04:16:17.853621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.416 qpair failed and we were unable to recover it. 
00:25:03.416 [2024-04-19 04:16:17.853786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.416 [2024-04-19 04:16:17.853906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.416 [2024-04-19 04:16:17.853915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.416 qpair failed and we were unable to recover it.
[... this three-record cycle (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 04:16:17.854012 through 04:16:17.875981 ...]
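For anyone triaging this failure mode: errno = 111 on Linux is ECONNREFUSED, meaning the initiator's TCP connect to 10.0.0.2 was actively refused because nothing was listening on port 4420 yet. That is consistent with the target application still starting up; its reactors only report in further down this log. A quick manual probe from the test node would look like the sketch below (illustrative only, not part of the harness; assumes a netcat build that supports -z):

# Illustrative probe, not part of this test run; assumes netcat with -z (scan) support.
# A "Connection refused" result corresponds to errno 111 (ECONNREFUSED).
nc -zv 10.0.0.2 4420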
[... the connect() retry cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") continues from 04:16:17.876082 onward, interleaved with the target application's startup notices below ...]
00:25:03.419 [2024-04-19 04:16:17.876561] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:03.419 [2024-04-19 04:16:17.876594] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:03.419 [2024-04-19 04:16:17.876605] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:03.419 [2024-04-19 04:16:17.876613] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:03.419 [2024-04-19 04:16:17.876622] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:03.419 [2024-04-19 04:16:17.876767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:25:03.419 [2024-04-19 04:16:17.876900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:25:03.419 [2024-04-19 04:16:17.877014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:25:03.420 [2024-04-19 04:16:17.877014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
[... the retry cycle resumes and repeats through 04:16:17.879113, each attempt failing with errno = 111 and ending "qpair failed and we were unable to recover it." ...]
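The app_setup_trace notices above can be acted on as-is while the target is running. Both capture paths, spelled out as commands (the snapshot invocation and the shm file name are taken verbatim from the notices; the copy destination below is an arbitrary choice for illustration):

# Snapshot trace events from the running nvmf app (command given in the notice above):
spdk_trace -s nvmf -i 0
# Or keep the trace shared-memory file for offline analysis/debug (destination is arbitrary):
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0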
00:25:03.420 [2024-04-19 04:16:17.879220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.420 [2024-04-19 04:16:17.879309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.420 [2024-04-19 04:16:17.879318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.420 qpair failed and we were unable to recover it.
[... the same retry cycle repeats from 04:16:17.879421 through 04:16:17.896170 (the log prefix advances from 00:25:03.420 to 00:25:03.689 over this run), every attempt failing with errno = 111 and ending "qpair failed and we were unable to recover it." ...]
00:25:03.689 [2024-04-19 04:16:17.896414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.896523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.896532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.896693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.896806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.896815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.897023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.897423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.897794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.897949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.898221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.898496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.898505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.898758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.898878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.898887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 
00:25:03.689 [2024-04-19 04:16:17.899126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.899425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.899434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.899565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.899742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.899750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.899873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.900266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.900692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.900946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.901210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.901514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 
00:25:03.689 [2024-04-19 04:16:17.901810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.901976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.902164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.902402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.902411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.902592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.902709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.902718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.902973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.903365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.689 [2024-04-19 04:16:17.903731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.689 [2024-04-19 04:16:17.903867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.689 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.904087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.904363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.904372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 
00:25:03.690 [2024-04-19 04:16:17.904489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.904671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.904679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.904821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.905257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.905571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.905705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.905875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.906330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.906729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.906814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 
00:25:03.690 [2024-04-19 04:16:17.906985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.907262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.907754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.907949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.908196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.908456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.908466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.908642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.908804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.908813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.909097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.909327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.909336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.909523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.909786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.909795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 
00:25:03.690 [2024-04-19 04:16:17.910017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.910391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.910707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.910972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.911169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.911452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.911461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.911659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.911818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.911827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.912010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.912272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.912281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.912528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.912786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.912795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 
00:25:03.690 [2024-04-19 04:16:17.912957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.913231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.913587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.913757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.913879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.914159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.690 qpair failed and we were unable to recover it. 00:25:03.690 [2024-04-19 04:16:17.914564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.690 [2024-04-19 04:16:17.914765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.914890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 
00:25:03.691 [2024-04-19 04:16:17.915325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.915547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.915784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.915968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.916158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.916315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.916324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.916495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.916669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.916678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.916839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.917218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 
00:25:03.691 [2024-04-19 04:16:17.917596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.917784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.918004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.918438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.918797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.918873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.918999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.919410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.919635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.919853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 
00:25:03.691 [2024-04-19 04:16:17.920182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.920456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.920465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.920642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.920765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.920773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.920942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.921221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.921230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.921475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.921635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.921644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.921863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.921992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.922001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.922248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.922505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.922515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.922693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.922822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.922832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 
00:25:03.691 [2024-04-19 04:16:17.922997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.923375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.923606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.923850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.924026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.924258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.924267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.691 qpair failed and we were unable to recover it. 00:25:03.691 [2024-04-19 04:16:17.924477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.691 [2024-04-19 04:16:17.924587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.924595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.924818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.924949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.924959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.925223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.925423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.925434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 
00:25:03.692 [2024-04-19 04:16:17.925565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.925743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.925752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.925937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.926217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.926226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.926429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.926613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.926621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.926876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.927055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.927064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.927325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.927660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.927670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.927848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.928246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 
00:25:03.692 [2024-04-19 04:16:17.928499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.928868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.928982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.929171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.929400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.929409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.929642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.929762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.929771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.929968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.930223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.930233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.930468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.930648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.930658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.930819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 
00:25:03.692 [2024-04-19 04:16:17.931183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.931502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.931783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.931914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.932111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.932367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.932377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.932544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.932653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.932662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.932879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.933157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.933166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.933376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.933661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.933670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 
00:25:03.692 [2024-04-19 04:16:17.933803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.934198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.934481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.934765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.934887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.692 qpair failed and we were unable to recover it. 00:25:03.692 [2024-04-19 04:16:17.935144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.935413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.692 [2024-04-19 04:16:17.935422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.693 qpair failed and we were unable to recover it. 00:25:03.693 [2024-04-19 04:16:17.935545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.693 [2024-04-19 04:16:17.935751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.693 [2024-04-19 04:16:17.935760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.693 qpair failed and we were unable to recover it. 00:25:03.693 [2024-04-19 04:16:17.936050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.693 [2024-04-19 04:16:17.936287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.693 [2024-04-19 04:16:17.936296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.693 qpair failed and we were unable to recover it. 
00:25:03.698 [2024-04-19 04:16:17.994355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.994550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.994558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.994730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 04:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:03.698 [2024-04-19 04:16:17.994900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.994909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.995097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 04:16:17 -- common/autotest_common.sh@850 -- # return 0
00:25:03.698 [2024-04-19 04:16:17.995325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.995334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 04:16:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:25:03.698 [2024-04-19 04:16:17.995616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 04:16:17 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:03.698 [2024-04-19 04:16:17.995871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.995881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 04:16:17 -- common/autotest_common.sh@10 -- # set +x
00:25:03.698 [2024-04-19 04:16:17.996133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.996385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.996394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.996595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.996853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.996862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
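
The xtrace fragments threaded through the failures above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) are the tail of the harness's bounded wait for the NVMe-oF target app to come up. A minimal bash sketch of that shape, assuming a hypothetical probe_target readiness check and an illustrative retry budget (not the exact autotest_common.sh code):

    wait_for_target() {
        # Poll until the probe succeeds or the retry budget runs out;
        # the traced '(( i == 0 ))' / 'return 0' pair is this epilogue.
        local i
        for (( i = 10; i != 0; i-- )); do
            probe_target && break   # hypothetical readiness check
            sleep 0.5
        done
        (( i == 0 )) && return 1    # budget exhausted: report failure
        return 0                    # target is up
    }
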
00:25:03.698 [2024-04-19 04:16:17.997114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.997288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.997296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.997459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.997724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.997733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.997856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.998017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.998028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.998313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.998540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.998549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.998782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.999171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:17.999612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:17.999857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.698 [2024-04-19 04:16:18.000030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:18.000217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.698 [2024-04-19 04:16:18.000228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.698 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.000533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.000779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.000788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.001023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.001252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.001261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.001492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.001684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.001693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.001866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.002142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.002151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.002409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.002662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.002671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.002922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.003410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.003848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.003976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.004145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.004400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.004410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.004583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.004894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.004903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.005081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.005420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.005801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.005968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.006231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.006518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.006528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.006686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.006962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.006971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.007264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.007459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.007468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.007743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.007921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.007930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.008140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.008315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.008324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.008556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.008812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.008821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.009072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.009331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.009340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.009600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.009777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.009785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.009969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.010448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.010845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.010973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.011220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.011394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.011403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.011657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.011909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.011918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.012137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.012418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.012428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.699 [2024-04-19 04:16:18.012589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.012782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.699 [2024-04-19 04:16:18.012791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.699 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.012970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.013193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.013202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.013377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.013506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.013515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.013821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.014204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.014593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.014829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.015095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.015362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.015372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.015610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.015872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.015881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.016110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.016340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.016353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.016619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.016883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.016892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.017097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.017354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.017363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.017566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.017735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.017744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.017996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.018274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.018755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.018944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.019116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.019276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.019285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.019538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.019649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.019658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.019835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.020182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.020626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.020781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.020972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.021334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.021634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.700 [2024-04-19 04:16:18.021810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.700 qpair failed and we were unable to recover it.
00:25:03.700 [2024-04-19 04:16:18.022023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.022450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.022775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.022913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.023193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.023411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.023421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.023708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.023999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.024008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.024187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.024480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.024491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.024697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.024902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.024910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.025155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.025321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.025329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.025507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.025797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.025806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.026040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.026228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.026237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.026411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.026641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.026649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.026821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.027161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.027549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.027751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.028000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.028120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.028129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.028232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.028524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.028535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.028736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.028991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.029000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.029179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.029424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.029433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.029662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.029772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.029780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.030074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.030244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.030254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 04:16:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:03.701 [2024-04-19 04:16:18.030385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.030544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.030553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.030670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 04:16:18 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:03.701 [2024-04-19 04:16:18.030879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.030888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.701 [2024-04-19 04:16:18.031181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.701 [2024-04-19 04:16:18.031362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.031373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.031617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.031801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.031809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.031918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
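
Two harness steps surface in the xtrace here: nvmf/common.sh installs a cleanup trap so the target is torn down even on an aborted run, and host/target_disconnect.sh issues its first RPC, creating a 64 MB malloc bdev with 512-byte blocks named Malloc0. A standalone sketch of those two steps, assuming an SPDK checkout with a running target; the cleanup function below is a stand-in for the harness's nvmftestfini:

    # Tear the target down even if the test is interrupted or exits early.
    cleanup() { pkill -f nvmf_tgt || :; }   # stand-in for nvmftestfini
    trap 'cleanup' SIGINT SIGTERM EXIT

    # Back the disconnect test with a RAM bdev: 64 MB total, 512-byte
    # blocks, named Malloc0 (rpc_cmd in the log wraps scripts/rpc.py).
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
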
00:25:03.701 [2024-04-19 04:16:18.032422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.032788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.032964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.701 qpair failed and we were unable to recover it.
00:25:03.701 [2024-04-19 04:16:18.033233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.701 [2024-04-19 04:16:18.033420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.033429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.033619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.033740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.033748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.033982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.034176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.034184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.034421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.034649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.034658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.034943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.035194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.035203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.035460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.035660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.035668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.035859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.036119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.036127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.036402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.036627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.036636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.036943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.037205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.037213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.037472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.037720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.037729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.038017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.038272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.038281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.038472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.038725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.038734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.038968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.039136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.039145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.039316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.039575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.039583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.039840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.040351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.040734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.040942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.041195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.041365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.041375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.041588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.041698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.041706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.042001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.042437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.042680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.042867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.043042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.043294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.702 [2024-04-19 04:16:18.043303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.702 qpair failed and we were unable to recover it.
00:25:03.702 [2024-04-19 04:16:18.043471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.043589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.043597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.043826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.044199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.044657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.044776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.045029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.045229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.045238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.045496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.045733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.045742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.045907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.046159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.046457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.046776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.703 [2024-04-19 04:16:18.046942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420
00:25:03.703 qpair failed and we were unable to recover it.
00:25:03.703 [2024-04-19 04:16:18.047177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.047374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.047383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.047556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.047835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.047845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.048100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.048279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.048289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.048555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.048721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.048731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.048941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.049341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.049676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.049867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 
00:25:03.703 [2024-04-19 04:16:18.050116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.050381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.050390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.703 [2024-04-19 04:16:18.050510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.050741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.703 [2024-04-19 04:16:18.050750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.703 qpair failed and we were unable to recover it. 00:25:03.704 [2024-04-19 04:16:18.050927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.051210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.051219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.704 qpair failed and we were unable to recover it. 00:25:03.704 [2024-04-19 04:16:18.051486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.051729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.051738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.704 qpair failed and we were unable to recover it. 00:25:03.704 [2024-04-19 04:16:18.051908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.052154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.052162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.704 qpair failed and we were unable to recover it. 00:25:03.704 [2024-04-19 04:16:18.052423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.052680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.704 [2024-04-19 04:16:18.052689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.704 qpair failed and we were unable to recover it. 00:25:03.704 [2024-04-19 04:16:18.052957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.705 [2024-04-19 04:16:18.053197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.705 [2024-04-19 04:16:18.053206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83c0000b90 with addr=10.0.0.2, port=4420 00:25:03.705 qpair failed and we were unable to recover it. 
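For reference: errno 111 on Linux is ECONNREFUSED. Each retry above is the initiator dialing 10.0.0.2:4420 before the NVMe/TCP target has started listening (the listen notice only appears further down, at 04:16:18.089285). A quick way to decode an errno value from a shell, assuming python3 is available:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused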
00:25:03.705 Malloc0
00:25:03.705 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.705 04:16:18 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:03.705 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.705 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.705 [... repeated posix.c:1037 connect() failed (errno = 111) / nvme_tcp.c:2371 sock connection error retry entries for tqpair=0x7f83c0000b90, 04:16:18.053379 through 04:16:18.060791, interleaved with the lines above, elided; each group ends "qpair failed and we were unable to recover it." ...]
00:25:03.705 [2024-04-19 04:16:18.061036] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:03.705 [... repeated connect() failed (errno = 111) retry entries, 04:16:18.061292 through 04:16:18.062778, elided ...]
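The rpc_cmd above is the test harness's wrapper around SPDK's scripts/rpc.py, talking to the running target over its RPC socket; the *** TCP Transport Init *** notice is the target-side confirmation. A minimal equivalent invocation, assuming a default SPDK checkout and RPC socket path (the script's extra -o tuning flag is omitted here):

    sudo scripts/rpc.py nvmf_create_transport -t TCP
    # -t/--trtype selects the transport; queue depths and buffer sizes fall back to TCP defaults.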
00:25:03.706 [... the same connect() failed (errno = 111) / sock connection error retry pattern continues uninterrupted, 04:16:18.062951 through 04:16:18.069087 ...]
00:25:03.706 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.706 04:16:18 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:03.706 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.706 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.706 [... repeated connect() failed (errno = 111) / sock connection error retry entries, 04:16:18.069350 through 04:16:18.075196, interleaved with the lines above, elided ...]
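This step creates the NVMe-oF subsystem the host will connect to. A stand-alone equivalent, with the same NQN and serial number the script uses:

    sudo scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # -a/--allow-any-host skips the host whitelist; -s/--serial-number sets the reported serial.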
00:25:03.707 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.707 04:16:18 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:03.707 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.707 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.707 [... repeated connect() failed (errno = 111) / sock connection error retry entries, 04:16:18.075474 through 04:16:18.081415, interleaved with the lines above, elided ...]
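The bare "Malloc0" line further up is RPC stdout: the name of the RAM-backed bdev created earlier, which this step attaches to cnode1 as a namespace. A sketch of the pair of calls, with the bdev size and block size assumed purely for illustration:

    sudo scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB, 512-byte blocks (sizes assumed)
    sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0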
00:25:03.707 [... connect() failed (errno = 111) / sock connection error retry entries continue, 04:16:18.081649 through 04:16:18.084725 ...]
00:25:03.707 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.707 04:16:18 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:03.707 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.708 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.708 [... repeated connect() failed (errno = 111) / sock connection error retry entries, 04:16:18.084982 through 04:16:18.089112, interleaved with the lines above, elided ...]
00:25:03.708 [2024-04-19 04:16:18.089285] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:03.708 [2024-04-19 04:16:18.091613] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.708 [2024-04-19 04:16:18.091706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.708 [2024-04-19 04:16:18.091726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.708 [2024-04-19 04:16:18.091733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.708 [2024-04-19 04:16:18.091739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:03.708 [2024-04-19 04:16:18.091756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.708 qpair failed and we were unable to recover it.
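Once nvmf_tcp_listen fires, TCP connects start succeeding and the errno 111 spam stops; the failure moves up a layer to the NVMe-oF Fabrics CONNECT command. sct 1 is the command-specific status type, and sc 130 (0x82) in the Fabrics CONNECT status space is Connect Invalid Parameters, consistent with the target-side "Unknown controller ID 0x1": the host is attaching an I/O qpair to a controller the target no longer recognizes, which is the condition this target_disconnect test provokes. The listener itself was added with the equivalent of:

    sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420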
00:25:03.708 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.708 04:16:18 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:03.708 04:16:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:03.708 04:16:18 -- common/autotest_common.sh@10 -- # set +x
00:25:03.708 [2024-04-19 04:16:18.101592] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.708 [2024-04-19 04:16:18.101670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.708 [2024-04-19 04:16:18.101686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.708 [2024-04-19 04:16:18.101692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.708 [2024-04-19 04:16:18.101697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:03.708 [2024-04-19 04:16:18.101712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:03.708 qpair failed and we were unable to recover it.
00:25:03.708 04:16:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:03.708 04:16:18 -- host/target_disconnect.sh@58 -- # wait 3966055
00:25:03.708 [... the same six-line Fabric CONNECT failure block repeats at 04:16:18.111629 and 04:16:18.121628, each attempt ending "qpair failed and we were unable to recover it." ...]
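The discovery listener makes the subsystem enumerable over the fabric; "discovery" appears to be shorthand that rpc.py expands to the well-known discovery NQN (nqn.2014-08.org.nvmexpress.discovery). The wait 3966055 then blocks on a background job started earlier in the script, whose ongoing reconnect attempts produce the failure blocks that follow. From an initiator, the discovery log page could be checked with nvme-cli (assumed installed):

    sudo scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420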
00:25:03.709 [... the same ctrlr.c: 718 "Unknown controller ID 0x1" / nvme_fabric CONNECT failure block (rc -5; sct 1, sc 130; CQ transport error -6 on qpair id 2, tqpair=0x7f83c0000b90) repeats roughly every 10 ms from 04:16:18.131661 through 04:16:18.272233, each attempt ending "qpair failed and we were unable to recover it." ...]
00:25:03.968 [2024-04-19 04:16:18.282103] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.282178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.282192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.282198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.282203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.282216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.292113] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.292234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.292248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.292254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.292259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.292273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.302175] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.302248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.302262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.302269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.302274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.302287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 
00:25:03.968 [2024-04-19 04:16:18.312163] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.312240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.312255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.312261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.312266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.312279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.322180] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.322250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.322264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.322270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.322275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.322288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.332227] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.332298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.332312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.332318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.332323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.332337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 
00:25:03.968 [2024-04-19 04:16:18.342284] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.342356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.342370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.342376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.342381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.342395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.352339] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.352424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.352441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.352447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.352452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.352466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 00:25:03.968 [2024-04-19 04:16:18.362314] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.362396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.362410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.362416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.968 [2024-04-19 04:16:18.362421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.968 [2024-04-19 04:16:18.362434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.968 qpair failed and we were unable to recover it. 
00:25:03.968 [2024-04-19 04:16:18.372354] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.968 [2024-04-19 04:16:18.372427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.968 [2024-04-19 04:16:18.372441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.968 [2024-04-19 04:16:18.372447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.372452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.372466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.382304] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.382411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.382425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.382431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.382436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.382450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.392424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.392492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.392506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.392512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.392517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.392532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 
00:25:03.969 [2024-04-19 04:16:18.402440] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.402513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.402527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.402533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.402538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.402551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.412540] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.412617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.412631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.412637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.412642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.412656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.422528] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.422617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.422631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.422637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.422642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.422656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 
00:25:03.969 [2024-04-19 04:16:18.432514] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.432584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.432598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.432604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.432609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.432622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.442549] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.442643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.442660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.442666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.442671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.442684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.452554] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.452633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.452648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.452654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.452659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.452673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 
00:25:03.969 [2024-04-19 04:16:18.462603] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.462676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.462690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.462697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.462702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.462715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.472623] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.472693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.472707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.969 [2024-04-19 04:16:18.472712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.969 [2024-04-19 04:16:18.472717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.969 [2024-04-19 04:16:18.472730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.969 qpair failed and we were unable to recover it. 00:25:03.969 [2024-04-19 04:16:18.482595] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.969 [2024-04-19 04:16:18.482668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.969 [2024-04-19 04:16:18.482682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.970 [2024-04-19 04:16:18.482688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.970 [2024-04-19 04:16:18.482696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.970 [2024-04-19 04:16:18.482709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.970 qpair failed and we were unable to recover it. 
00:25:03.970 [2024-04-19 04:16:18.492654] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.970 [2024-04-19 04:16:18.492731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.970 [2024-04-19 04:16:18.492749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.970 [2024-04-19 04:16:18.492755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.970 [2024-04-19 04:16:18.492761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:03.970 [2024-04-19 04:16:18.492775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.970 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.502758] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.502836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.502853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.502860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.502865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.502880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.512714] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.512787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.512802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.512809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.512814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.512827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 
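The bracketed wall-clock timestamps show the host retrying the CONNECT at a steady cadence (18.161, 18.171, 18.181, ... in the entries above, i.e. roughly one attempt every 10 ms). A minimal sketch that recovers that interval from a saved console log is below; the regular expression assumes exactly the entry format shown in this transcript.

# Minimal sketch, assuming the log format shown above: pull the wall-clock
# timestamp of each rejected CONNECT attempt and report the gaps between them.
import re
from datetime import datetime

ATTEMPT = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] ctrlr\.c: .*Unknown controller ID")

def attempt_intervals(log_text):
    stamps = [datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
              for m in ATTEMPT.finditer(log_text)]
    return [round((b - a).total_seconds(), 3) for a, b in zip(stamps, stamps[1:])]

# For the excerpt above this yields values clustered around 0.010 seconds.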
00:25:04.228 [2024-04-19 04:16:18.522764] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.522835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.522849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.522855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.522860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.522874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.532786] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.532862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.532876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.532882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.532887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.532901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.542761] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.542839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.542854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.542860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.542866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.542879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-04-19 04:16:18.552844] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.552961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.552975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.552982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.552987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.553001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.562904] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.562974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.562987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.562994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.562999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.563012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.572926] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.572998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.573012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.573021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.573026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.573039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-04-19 04:16:18.582933] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.583005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.583019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.583025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.583030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.583043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.592960] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.593030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.593045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.593051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.593056] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.593070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.603004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.603085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.603099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.603105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.603110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.603124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 
00:25:04.228 [2024-04-19 04:16:18.613019] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.613115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.613130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.613135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.613141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.613154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.623079] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.228 [2024-04-19 04:16:18.623144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.228 [2024-04-19 04:16:18.623158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.228 [2024-04-19 04:16:18.623164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.228 [2024-04-19 04:16:18.623170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.228 [2024-04-19 04:16:18.623183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.228 qpair failed and we were unable to recover it. 00:25:04.228 [2024-04-19 04:16:18.633079] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.633150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.633163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.633169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.633175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.633188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-04-19 04:16:18.643116] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.643190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.643204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.643209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.643214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.643228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.653155] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.653226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.653241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.653247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.653252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.653266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.663130] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.663197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.663212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.663222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.663227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.663240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-04-19 04:16:18.673205] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.673272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.673287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.673293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.673298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.673311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.683289] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.683364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.683378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.683384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.683389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.683402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.693286] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.693364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.693378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.693384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.693390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.693403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-04-19 04:16:18.703301] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.703413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.703427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.703433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.703438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.703452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.713272] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.713351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.713366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.713372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.713377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.713391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.723387] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.723492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.723506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.723511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.723516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.723529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.229 [2024-04-19 04:16:18.733374] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.733448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.733462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.733468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.733473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.733487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.743448] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.743533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.743547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.743553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.743558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.743571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 00:25:04.229 [2024-04-19 04:16:18.753486] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.229 [2024-04-19 04:16:18.753568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.229 [2024-04-19 04:16:18.753593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.229 [2024-04-19 04:16:18.753603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.229 [2024-04-19 04:16:18.753611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.229 [2024-04-19 04:16:18.753631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.229 qpair failed and we were unable to recover it. 
00:25:04.488 [2024-04-19 04:16:18.763462] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.488 [2024-04-19 04:16:18.763541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.488 [2024-04-19 04:16:18.763558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.488 [2024-04-19 04:16:18.763564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.488 [2024-04-19 04:16:18.763570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.488 [2024-04-19 04:16:18.763584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.488 qpair failed and we were unable to recover it. 00:25:04.488 [2024-04-19 04:16:18.773479] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.488 [2024-04-19 04:16:18.773554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.488 [2024-04-19 04:16:18.773569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.488 [2024-04-19 04:16:18.773575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.488 [2024-04-19 04:16:18.773580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.488 [2024-04-19 04:16:18.773595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.488 qpair failed and we were unable to recover it. 00:25:04.488 [2024-04-19 04:16:18.783590] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.488 [2024-04-19 04:16:18.783671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.488 [2024-04-19 04:16:18.783685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.488 [2024-04-19 04:16:18.783692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.488 [2024-04-19 04:16:18.783697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90 00:25:04.488 [2024-04-19 04:16:18.783710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:04.488 qpair failed and we were unable to recover it. 
00:25:04.488 [2024-04-19 04:16:18.793578] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.488 [2024-04-19 04:16:18.793650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.793665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.793673] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.793678] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.793695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.803511] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.803631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.803645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.803651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.803656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.803670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.813604] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.813674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.813688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.813694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.813699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.813712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.823584] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.823684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.823698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.823704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.823709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.823722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.833675] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.833743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.833757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.833762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.833768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.833781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.843685] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.843753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.843770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.843776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.843781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.843794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.853648] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.853729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.853743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.853749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.853754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.853768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.863752] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.863824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.863838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.863844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.863849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.863862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.873692] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.873765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.873779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.873785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.873790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.873803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.883931] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.884051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.884066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.884072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.884080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.884094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.893897] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.893975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.893990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.893996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.894002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.894015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.903942] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.904009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.904023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.904029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.904035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.904048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.913942] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.914008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.914023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.914029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.914035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:04.489 [2024-04-19 04:16:18.914048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.923981] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.924128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.924186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.924212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.924233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.924279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.934004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.934134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.934167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.934183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.934196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.934227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.944005] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.944092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.944116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.944127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.944137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.944158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.954048] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.954163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.954186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.954196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.954205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.954224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.964109] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.964244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.964267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.964277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.964286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.964305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.974100] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.974189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.974212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.974222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.974234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.974254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.984060] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.984181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.984204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.984214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.984222] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.984242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:18.994102] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:18.994223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:18.994245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:18.994255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:18.994264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.489 [2024-04-19 04:16:18.994283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.489 qpair failed and we were unable to recover it.
00:25:04.489 [2024-04-19 04:16:19.004186] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.489 [2024-04-19 04:16:19.004274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.489 [2024-04-19 04:16:19.004296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.489 [2024-04-19 04:16:19.004306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.489 [2024-04-19 04:16:19.004315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.490 [2024-04-19 04:16:19.004334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.490 qpair failed and we were unable to recover it.
00:25:04.490 [2024-04-19 04:16:19.014206] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.748 [2024-04-19 04:16:19.014296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.748 [2024-04-19 04:16:19.014318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.748 [2024-04-19 04:16:19.014328] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.748 [2024-04-19 04:16:19.014337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.748 [2024-04-19 04:16:19.014367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.748 qpair failed and we were unable to recover it.
00:25:04.748 [2024-04-19 04:16:19.024268] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.748 [2024-04-19 04:16:19.024366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.748 [2024-04-19 04:16:19.024388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.748 [2024-04-19 04:16:19.024398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.748 [2024-04-19 04:16:19.024406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.748 [2024-04-19 04:16:19.024426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.748 qpair failed and we were unable to recover it.
00:25:04.748 [2024-04-19 04:16:19.034190] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.748 [2024-04-19 04:16:19.034276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.748 [2024-04-19 04:16:19.034297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.748 [2024-04-19 04:16:19.034307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.748 [2024-04-19 04:16:19.034316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.748 [2024-04-19 04:16:19.034335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.748 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.044372] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.044503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.044525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.044535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.044543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.044562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.054325] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.054421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.054443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.054453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.054462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.054481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.064383] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.064505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.064527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.064537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.064549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.064570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.074382] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.074465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.074487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.074497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.074505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.074525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.084369] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.084456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.084477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.084487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.084495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.084515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.094517] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.094634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.094657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.094667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.094676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.094695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.104489] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.104571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.104592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.104602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.104611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.104629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.114501] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.114629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.114652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.114662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.114671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.114691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.124476] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.124570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.124592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.124603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.124611] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.124633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.134575] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.134667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.134689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.134699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.134707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.134726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.144595] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.144685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.144706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.144716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.144725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.144744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.154602] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.154715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.154737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.154754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.154763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.154782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.164676] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.164760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.749 [2024-04-19 04:16:19.164782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.749 [2024-04-19 04:16:19.164792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.749 [2024-04-19 04:16:19.164801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.749 [2024-04-19 04:16:19.164820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.749 qpair failed and we were unable to recover it.
00:25:04.749 [2024-04-19 04:16:19.174674] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.749 [2024-04-19 04:16:19.174760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.174782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.174792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.174800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.174819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.184654] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.184783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.184805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.184815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.184824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.184844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.194758] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.194842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.194863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.194873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.194882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.194901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.204837] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.204924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.204945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.204955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.204963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.204982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.214788] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.214916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.214938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.214948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.214956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.214975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.224767] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.224855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.224877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.224887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.224896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.224915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.234838] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.234967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.234988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.234998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.235007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.235026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.244908] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.244996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.245017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.245032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.245041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.245060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.254976] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.255101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.255123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.255134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.255142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.255163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750 [2024-04-19 04:16:19.264903] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.750 [2024-04-19 04:16:19.264996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.750 [2024-04-19 04:16:19.265017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.750 [2024-04-19 04:16:19.265027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.750 [2024-04-19 04:16:19.265036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:04.750 [2024-04-19 04:16:19.265054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.750 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.274928] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.275190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.275212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.275222] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.275231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.275251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.284988] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.285077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.285099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.285109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.285117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.285137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.295105] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.295225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.295247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.295257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.295265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.295285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.305113] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.305195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.305216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.305226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.305234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.305253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.315125] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.315245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.315266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.315276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.315284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.315303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.325251] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.325349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.325372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.325382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.325391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.325411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.335123] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.335213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.335234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.335249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.335257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.335276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.345155] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.345234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.345256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.345266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.345275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.345294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.355190] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.009 [2024-04-19 04:16:19.355309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.009 [2024-04-19 04:16:19.355332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.009 [2024-04-19 04:16:19.355341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.009 [2024-04-19 04:16:19.355356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:05.009 [2024-04-19 04:16:19.355376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.009 qpair failed and we were unable to recover it.
00:25:05.009 [2024-04-19 04:16:19.365225] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.365312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.365333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.365349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.365358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.365377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.375300] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.375400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.375423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.375433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.375441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.375460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.385338] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.385467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.385487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.385497] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.385505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.385525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-04-19 04:16:19.395321] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.395458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.395479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.395488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.395497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.395516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.405354] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.405441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.405461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.405470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.405479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.405497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.415434] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.415521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.415542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.415552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.415560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.415579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-04-19 04:16:19.425474] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.425558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.425581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.425591] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.425599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.425618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.435446] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.435535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.435555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.435565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.435573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.435591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.445466] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.445559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.445579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.445589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.445597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.445616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-04-19 04:16:19.455482] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.455574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.455595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.455604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.455612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.455631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.465641] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.465768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.465787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.465797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.465805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.465825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.475541] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.475626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.475646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.475655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.475664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.475682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 
00:25:05.010 [2024-04-19 04:16:19.485667] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.485755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.485775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.485784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.485793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.010 [2024-04-19 04:16:19.485811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.010 qpair failed and we were unable to recover it. 00:25:05.010 [2024-04-19 04:16:19.495701] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.010 [2024-04-19 04:16:19.495787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.010 [2024-04-19 04:16:19.495807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.010 [2024-04-19 04:16:19.495816] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.010 [2024-04-19 04:16:19.495824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.011 [2024-04-19 04:16:19.495843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-04-19 04:16:19.505728] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.011 [2024-04-19 04:16:19.505811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.011 [2024-04-19 04:16:19.505830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.011 [2024-04-19 04:16:19.505840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.011 [2024-04-19 04:16:19.505848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.011 [2024-04-19 04:16:19.505867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.011 qpair failed and we were unable to recover it. 
00:25:05.011 [2024-04-19 04:16:19.515715] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.011 [2024-04-19 04:16:19.515836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.011 [2024-04-19 04:16:19.515860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.011 [2024-04-19 04:16:19.515871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.011 [2024-04-19 04:16:19.515879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.011 [2024-04-19 04:16:19.515897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.011 [2024-04-19 04:16:19.525806] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.011 [2024-04-19 04:16:19.525928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.011 [2024-04-19 04:16:19.525948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.011 [2024-04-19 04:16:19.525958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.011 [2024-04-19 04:16:19.525967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.011 [2024-04-19 04:16:19.525986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.011 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.535837] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.535937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.535958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.535967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.535976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.535995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-04-19 04:16:19.545832] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.545917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.545937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.545947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.545955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.545974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.555860] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.555935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.555955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.555965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.555974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.555996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.565916] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.566010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.566030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.566039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.566048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.566066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-04-19 04:16:19.575908] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.575997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.576017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.576026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.576035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.576053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.585911] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.586032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.586052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.586061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.586070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.586089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.596004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.596085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.596104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.596114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.596123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.596142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-04-19 04:16:19.606034] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.606207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.606234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.606245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.606253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.606274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.616047] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.616140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.616161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.616170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.616178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.616198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 00:25:05.270 [2024-04-19 04:16:19.626117] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.626212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.626232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.626242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.626250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.270 [2024-04-19 04:16:19.626269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.270 qpair failed and we were unable to recover it. 
00:25:05.270 [2024-04-19 04:16:19.636127] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.270 [2024-04-19 04:16:19.636209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.270 [2024-04-19 04:16:19.636230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.270 [2024-04-19 04:16:19.636239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.270 [2024-04-19 04:16:19.636248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.636266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.646120] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.646213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.646233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.646243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.646251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.646274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.656161] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.656248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.656268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.656277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.656286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.656305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 
00:25:05.271 [2024-04-19 04:16:19.666201] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.666323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.666348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.666359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.666367] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.666386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.676247] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.676350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.676370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.676380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.676389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.676408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.686286] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.686373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.686394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.686403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.686412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.686430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 
00:25:05.271 [2024-04-19 04:16:19.696224] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.696310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.696333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.696349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.696358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.696377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.706325] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.706412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.706432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.706441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.706450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.706469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.716320] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.716405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.716430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.716440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.716449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.716468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 
00:25:05.271 [2024-04-19 04:16:19.726288] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.726383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.726403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.726413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.726421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.726440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.736393] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.736475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.736497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.736506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.736515] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.736537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.746462] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.746551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.746571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.746580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.746588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.746607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 
00:25:05.271 [2024-04-19 04:16:19.756365] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.756476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.756496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.756506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.756514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.756533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.766492] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.271 [2024-04-19 04:16:19.766580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.271 [2024-04-19 04:16:19.766599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.271 [2024-04-19 04:16:19.766609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.271 [2024-04-19 04:16:19.766617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.271 [2024-04-19 04:16:19.766636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.271 qpair failed and we were unable to recover it. 00:25:05.271 [2024-04-19 04:16:19.776530] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.272 [2024-04-19 04:16:19.776622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.272 [2024-04-19 04:16:19.776642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.272 [2024-04-19 04:16:19.776651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.272 [2024-04-19 04:16:19.776659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.272 [2024-04-19 04:16:19.776678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.272 qpair failed and we were unable to recover it. 
00:25:05.272 [2024-04-19 04:16:19.786607] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.272 [2024-04-19 04:16:19.786696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.272 [2024-04-19 04:16:19.786720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.272 [2024-04-19 04:16:19.786729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.272 [2024-04-19 04:16:19.786737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.272 [2024-04-19 04:16:19.786757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.272 qpair failed and we were unable to recover it. 00:25:05.530 [2024-04-19 04:16:19.796608] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.530 [2024-04-19 04:16:19.796702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.530 [2024-04-19 04:16:19.796722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.530 [2024-04-19 04:16:19.796731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.530 [2024-04-19 04:16:19.796739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.530 [2024-04-19 04:16:19.796759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.530 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.806611] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.806736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.806756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.806766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.806774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.806794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 
00:25:05.531 [2024-04-19 04:16:19.816621] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.816714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.816734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.816743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.816752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.816771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.826676] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.826819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.826839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.826848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.826863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.826883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.836681] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.836761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.836781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.836790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.836799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.836817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 
00:25:05.531 [2024-04-19 04:16:19.846690] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.846809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.846829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.846838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.846847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.846866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.856690] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.856778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.856799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.856808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.856816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.856836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.866742] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.866872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.866892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.866901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.866910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.866928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 
00:25:05.531 [2024-04-19 04:16:19.876807] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.876933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.876953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.876962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.876970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.876990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.886848] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.886937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.886957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.886966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.886975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.886993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 00:25:05.531 [2024-04-19 04:16:19.896826] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.531 [2024-04-19 04:16:19.896917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.531 [2024-04-19 04:16:19.896937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.531 [2024-04-19 04:16:19.896946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.531 [2024-04-19 04:16:19.896954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:05.531 [2024-04-19 04:16:19.896973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.531 qpair failed and we were unable to recover it. 
[68 further identical CONNECT-failure sequences follow, roughly one every 10 ms, from [2024-04-19 04:16:19.886848] through [2024-04-19 04:16:20.558765] (wall-clock prefixes 00:25:05.531 to 00:25:06.054). Each repeats the same seven records: the target rejects the I/O-queue CONNECT with "Unknown controller ID 0x1"; the Connect command fails with rc -5 against trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; the completion reports sct 1, sc 130; the Fabric CONNECT poll fails; tqpair=0xd1ef90 fails to connect; CQ transport error -6 (No such device or address) is raised on qpair id 3; and the attempt ends with "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between repetitions.]
00:25:06.054 [2024-04-19 04:16:20.568830] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.054 [2024-04-19 04:16:20.568922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.054 [2024-04-19 04:16:20.568943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.054 [2024-04-19 04:16:20.568952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.054 [2024-04-19 04:16:20.568961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.054 [2024-04-19 04:16:20.568981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.578871] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.578959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.578979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.578988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.578997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.579016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.588816] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.588908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.588927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.588937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.588945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.588970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 
00:25:06.312 [2024-04-19 04:16:20.598840] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.598929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.598951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.598961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.598970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.598990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.608867] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.608951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.608971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.608981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.608989] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.609010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.618962] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.619049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.619070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.619080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.619088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.619107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 
00:25:06.312 [2024-04-19 04:16:20.629020] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.629135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.629155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.629164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.629173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.629191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.639090] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.639207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.639232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.639242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.639249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.639269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.649000] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.649088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.649108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.649118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.649126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.649145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 
00:25:06.312 [2024-04-19 04:16:20.659112] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.659237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.659257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.659266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.659275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.659294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.669150] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.669235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.669255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.669264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.669273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.669292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.679188] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.679271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.679291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.679301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.679309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.679332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 
00:25:06.312 [2024-04-19 04:16:20.689197] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.689327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.689352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.689362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.689370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.689390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.312 [2024-04-19 04:16:20.699122] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.312 [2024-04-19 04:16:20.699208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.312 [2024-04-19 04:16:20.699229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.312 [2024-04-19 04:16:20.699238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.312 [2024-04-19 04:16:20.699247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.312 [2024-04-19 04:16:20.699265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.312 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.709229] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.709313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.709333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.709347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.709356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.709375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 
00:25:06.313 [2024-04-19 04:16:20.719302] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.719390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.719412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.719422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.719431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.719450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.729231] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.729320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.729349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.729360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.729368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.729387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.739324] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.739419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.739438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.739448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.739456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.739475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 
00:25:06.313 [2024-04-19 04:16:20.749315] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.749401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.749421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.749431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.749439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.749458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.759317] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.759405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.759425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.759435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.759443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.759462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.769408] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.769505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.769525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.769535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.769547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.769566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 
00:25:06.313 [2024-04-19 04:16:20.779450] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.779581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.779602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.779611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.779620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.779639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.789463] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.789582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.789602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.789611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.789620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.789639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.799426] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.799566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.799586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.799596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.799604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.799624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 
00:25:06.313 [2024-04-19 04:16:20.809545] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.809635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.809655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.809664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.809672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.809691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.819496] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.819585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.819606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.819615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.819623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.819642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 00:25:06.313 [2024-04-19 04:16:20.829604] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.313 [2024-04-19 04:16:20.829691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.313 [2024-04-19 04:16:20.829712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.313 [2024-04-19 04:16:20.829722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.313 [2024-04-19 04:16:20.829730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.313 [2024-04-19 04:16:20.829748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.313 qpair failed and we were unable to recover it. 
00:25:06.571 [2024-04-19 04:16:20.839542] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.571 [2024-04-19 04:16:20.839672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.571 [2024-04-19 04:16:20.839691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.571 [2024-04-19 04:16:20.839701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.571 [2024-04-19 04:16:20.839709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.571 [2024-04-19 04:16:20.839727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.571 qpair failed and we were unable to recover it. 00:25:06.571 [2024-04-19 04:16:20.849640] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.571 [2024-04-19 04:16:20.849730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.571 [2024-04-19 04:16:20.849750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.571 [2024-04-19 04:16:20.849759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.571 [2024-04-19 04:16:20.849768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.571 [2024-04-19 04:16:20.849786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.571 qpair failed and we were unable to recover it. 00:25:06.571 [2024-04-19 04:16:20.859641] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.571 [2024-04-19 04:16:20.859763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.571 [2024-04-19 04:16:20.859783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.571 [2024-04-19 04:16:20.859792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.571 [2024-04-19 04:16:20.859805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.571 [2024-04-19 04:16:20.859824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.571 qpair failed and we were unable to recover it. 
00:25:06.571 [2024-04-19 04:16:20.869768] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.571 [2024-04-19 04:16:20.869890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.571 [2024-04-19 04:16:20.869910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.571 [2024-04-19 04:16:20.869919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.571 [2024-04-19 04:16:20.869928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.571 [2024-04-19 04:16:20.869946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.571 qpair failed and we were unable to recover it. 00:25:06.571 [2024-04-19 04:16:20.879737] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.879827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.879847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.879856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.879865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.879883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.889736] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.889842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.889862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.889871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.889880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.889899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 
00:25:06.572 [2024-04-19 04:16:20.899815] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.899913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.899933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.899942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.899950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.899969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.909807] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.909898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.909918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.909927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.909935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.909954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.919913] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.920034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.920054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.920064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.920072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.920091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 
00:25:06.572 [2024-04-19 04:16:20.929903] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.929996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.930016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.930025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.930034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.930053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.939902] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.939987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.940007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.940017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.940025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.940044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.949901] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.950028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.950048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.950057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.950069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.950088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 
00:25:06.572 [2024-04-19 04:16:20.959961] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.960042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.960062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.960072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.960080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.960099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.969999] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.970089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.970110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.970119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.970128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.970147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:20.980015] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.980100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.980120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.980130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.980138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.980157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 
00:25:06.572 [2024-04-19 04:16:20.990037] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:20.990119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:20.990140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:20.990150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:20.990158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:20.990178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:21.000076] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:21.000163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:21.000183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:21.000192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:21.000201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:21.000220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:21.010127] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:21.010236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:21.010256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:21.010266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:21.010274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:21.010293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 
00:25:06.572 [2024-04-19 04:16:21.020127] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:21.020229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:21.020249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:21.020258] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:21.020267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:21.020286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.572 qpair failed and we were unable to recover it. 00:25:06.572 [2024-04-19 04:16:21.030152] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.572 [2024-04-19 04:16:21.030276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.572 [2024-04-19 04:16:21.030296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.572 [2024-04-19 04:16:21.030305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.572 [2024-04-19 04:16:21.030314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.572 [2024-04-19 04:16:21.030333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.573 qpair failed and we were unable to recover it. 00:25:06.573 [2024-04-19 04:16:21.040228] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.573 [2024-04-19 04:16:21.040313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.573 [2024-04-19 04:16:21.040332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.573 [2024-04-19 04:16:21.040351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.573 [2024-04-19 04:16:21.040360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.573 [2024-04-19 04:16:21.040380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.573 qpair failed and we were unable to recover it. 
00:25:06.573 [2024-04-19 04:16:21.050170] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.573 [2024-04-19 04:16:21.050261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.573 [2024-04-19 04:16:21.050281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.573 [2024-04-19 04:16:21.050290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.573 [2024-04-19 04:16:21.050298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.573 [2024-04-19 04:16:21.050317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.573 qpair failed and we were unable to recover it. 00:25:06.573 [2024-04-19 04:16:21.060206] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.573 [2024-04-19 04:16:21.060294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.573 [2024-04-19 04:16:21.060314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.573 [2024-04-19 04:16:21.060323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.573 [2024-04-19 04:16:21.060332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.573 [2024-04-19 04:16:21.060356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.573 qpair failed and we were unable to recover it. 00:25:06.573 [2024-04-19 04:16:21.070296] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.573 [2024-04-19 04:16:21.070395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.573 [2024-04-19 04:16:21.070416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.573 [2024-04-19 04:16:21.070425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.573 [2024-04-19 04:16:21.070434] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:06.573 [2024-04-19 04:16:21.070453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.573 qpair failed and we were unable to recover it. 
00:25:06.573 [2024-04-19 04:16:21.080302] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.573 [2024-04-19 04:16:21.080419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.573 [2024-04-19 04:16:21.080440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.573 [2024-04-19 04:16:21.080450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.573 [2024-04-19 04:16:21.080457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.573 [2024-04-19 04:16:21.080477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.573 qpair failed and we were unable to recover it.
00:25:06.573 [2024-04-19 04:16:21.090335] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.573 [2024-04-19 04:16:21.090441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.573 [2024-04-19 04:16:21.090460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.573 [2024-04-19 04:16:21.090470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.573 [2024-04-19 04:16:21.090478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.573 [2024-04-19 04:16:21.090497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.573 qpair failed and we were unable to recover it.
00:25:06.831 [2024-04-19 04:16:21.100369] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.831 [2024-04-19 04:16:21.100489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.831 [2024-04-19 04:16:21.100509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.831 [2024-04-19 04:16:21.100519] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.831 [2024-04-19 04:16:21.100527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.100546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.110479] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.110566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.110586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.110596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.110604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.110623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.120469] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.120553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.120573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.120582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.120591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.120610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.130467] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.130559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.130579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.130593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.130601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.130621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.140518] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.140603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.140623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.140633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.140641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.140660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.150518] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.150604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.150624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.150633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.150641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.150660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.160558] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.160641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.160662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.160671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.160680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.160699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.170610] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.170703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.170723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.170732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.170740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.170759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.180605] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.180698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.180718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.180728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.180736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.180755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.190641] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.190732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.190752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.190762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.190770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.190790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.200670] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.200753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.200772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.200782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.200790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.200809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.210744] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.210874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.210894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.210903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.210912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.210931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.220718] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.220808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.220828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.220842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.220851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.220869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.230793] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.832 [2024-04-19 04:16:21.230877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.832 [2024-04-19 04:16:21.230896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.832 [2024-04-19 04:16:21.230905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.832 [2024-04-19 04:16:21.230913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.832 [2024-04-19 04:16:21.230932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.832 qpair failed and we were unable to recover it.
00:25:06.832 [2024-04-19 04:16:21.240786] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.240867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.240887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.240896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.240904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.240923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.250773] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.250862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.250882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.250891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.250900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.250919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.260854] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.260946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.260965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.260974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.260983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.261001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.270837] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.270928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.270947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.270956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.270965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.270984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.280927] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.281007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.281026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.281035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.281044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.281062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.290930] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.291014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.291034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.291043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.291052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.291070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.300968] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.301055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.301075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.301084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.301092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.301112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.310983] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.311140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.311164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.311173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.311182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.311201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.321008] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.321092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.321112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.321121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.321129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.321148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.331055] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.331141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.331160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.331170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.331178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.331197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.341136] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.341223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.341244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.341253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.341262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.341281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:06.833 [2024-04-19 04:16:21.351122] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.833 [2024-04-19 04:16:21.351207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.833 [2024-04-19 04:16:21.351226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.833 [2024-04-19 04:16:21.351236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.833 [2024-04-19 04:16:21.351244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:06.833 [2024-04-19 04:16:21.351263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.833 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.361149] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.361264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.361284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.361293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.361302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.361321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.371176] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.371301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.371321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.371330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.371338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.371363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.381124] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.381219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.381239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.381248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.381257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.381276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.391261] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.391383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.391403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.391413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.391421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.391441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.401257] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.401338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.401370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.401380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.401388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.401408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.411212] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.411298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.411318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.411327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.411336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.411361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.421309] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.421395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.421415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.421425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.421433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.421452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.431357] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.092 [2024-04-19 04:16:21.431444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.092 [2024-04-19 04:16:21.431464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.092 [2024-04-19 04:16:21.431473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.092 [2024-04-19 04:16:21.431481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.092 [2024-04-19 04:16:21.431500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.092 qpair failed and we were unable to recover it.
00:25:07.092 [2024-04-19 04:16:21.441374] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.441456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.441476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.441485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.441494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.441516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.451413] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.451502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.451522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.451532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.451540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.451559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.461420] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.461507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.461527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.461536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.461545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.461564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.471470] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.471553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.471573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.471583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.471591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.471610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.481494] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.481576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.481596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.481605] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.481613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.481632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.491547] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.491635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.491658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.491668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.491676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.491695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.501600] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.501686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.501706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.501715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.501723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.501742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.511598] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.511682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.511701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.511711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.511719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.511738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.521613] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.521698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.521718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.521728] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.521736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.521755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.531660] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.531745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.531765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.531774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.531782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.531805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.541622] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.541755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.541775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.541784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.541793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.541811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.551679] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.551772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.551792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.551801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.551810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.551829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.561734] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.561826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.561845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.561854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.561862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.561881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.093 qpair failed and we were unable to recover it.
00:25:07.093 [2024-04-19 04:16:21.571773] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.093 [2024-04-19 04:16:21.571859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.093 [2024-04-19 04:16:21.571879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.093 [2024-04-19 04:16:21.571888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.093 [2024-04-19 04:16:21.571896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.093 [2024-04-19 04:16:21.571915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.094 qpair failed and we were unable to recover it.
00:25:07.094 [2024-04-19 04:16:21.581757] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.094 [2024-04-19 04:16:21.581846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.094 [2024-04-19 04:16:21.581870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.094 [2024-04-19 04:16:21.581879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.094 [2024-04-19 04:16:21.581887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.094 [2024-04-19 04:16:21.581906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.094 qpair failed and we were unable to recover it.
00:25:07.094 [2024-04-19 04:16:21.591845] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.094 [2024-04-19 04:16:21.591930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.094 [2024-04-19 04:16:21.591949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.094 [2024-04-19 04:16:21.591959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.094 [2024-04-19 04:16:21.591967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.094 [2024-04-19 04:16:21.591986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.094 qpair failed and we were unable to recover it. 00:25:07.094 [2024-04-19 04:16:21.601847] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.094 [2024-04-19 04:16:21.601928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.094 [2024-04-19 04:16:21.601950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.094 [2024-04-19 04:16:21.601959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.094 [2024-04-19 04:16:21.601968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.094 [2024-04-19 04:16:21.601987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.094 qpair failed and we were unable to recover it. 00:25:07.094 [2024-04-19 04:16:21.611871] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.094 [2024-04-19 04:16:21.611990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.094 [2024-04-19 04:16:21.612011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.094 [2024-04-19 04:16:21.612021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.094 [2024-04-19 04:16:21.612030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.094 [2024-04-19 04:16:21.612049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.094 qpair failed and we were unable to recover it. 
00:25:07.352 [2024-04-19 04:16:21.621897] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.352 [2024-04-19 04:16:21.621991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.352 [2024-04-19 04:16:21.622011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.352 [2024-04-19 04:16:21.622020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.352 [2024-04-19 04:16:21.622029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.352 [2024-04-19 04:16:21.622051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.352 qpair failed and we were unable to recover it. 00:25:07.352 [2024-04-19 04:16:21.631964] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.352 [2024-04-19 04:16:21.632049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.352 [2024-04-19 04:16:21.632069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.352 [2024-04-19 04:16:21.632078] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.352 [2024-04-19 04:16:21.632086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.352 [2024-04-19 04:16:21.632106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.352 qpair failed and we were unable to recover it. 00:25:07.352 [2024-04-19 04:16:21.641946] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.352 [2024-04-19 04:16:21.642056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.352 [2024-04-19 04:16:21.642075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.352 [2024-04-19 04:16:21.642085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.352 [2024-04-19 04:16:21.642093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90 00:25:07.352 [2024-04-19 04:16:21.642112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:07.352 qpair failed and we were unable to recover it. 
00:25:07.352 [2024-04-19 04:16:21.652004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.352 [2024-04-19 04:16:21.652092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.352 [2024-04-19 04:16:21.652112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.352 [2024-04-19 04:16:21.652122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.352 [2024-04-19 04:16:21.652130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.352 [2024-04-19 04:16:21.652149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.352 qpair failed and we were unable to recover it.
00:25:07.352 [2024-04-19 04:16:21.662038] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.352 [2024-04-19 04:16:21.662132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.352 [2024-04-19 04:16:21.662152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.352 [2024-04-19 04:16:21.662161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.352 [2024-04-19 04:16:21.662169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.352 [2024-04-19 04:16:21.662188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.352 qpair failed and we were unable to recover it.
00:25:07.352 [2024-04-19 04:16:21.672087] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.352 [2024-04-19 04:16:21.672177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.352 [2024-04-19 04:16:21.672200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.352 [2024-04-19 04:16:21.672210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.352 [2024-04-19 04:16:21.672218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.352 [2024-04-19 04:16:21.672238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.352 qpair failed and we were unable to recover it.
00:25:07.352 [2024-04-19 04:16:21.682016] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.352 [2024-04-19 04:16:21.682163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.352 [2024-04-19 04:16:21.682183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.352 [2024-04-19 04:16:21.682193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.352 [2024-04-19 04:16:21.682201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.352 [2024-04-19 04:16:21.682220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.352 qpair failed and we were unable to recover it.
00:25:07.352 [2024-04-19 04:16:21.692121] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.352 [2024-04-19 04:16:21.692208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.352 [2024-04-19 04:16:21.692229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.352 [2024-04-19 04:16:21.692238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.352 [2024-04-19 04:16:21.692246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.692265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.702090] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.702174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.702194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.702203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.702211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.702229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.712165] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.712246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.712266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.712275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.712288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.712307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.722207] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.722288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.722308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.722317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.722325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.722348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.732261] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.732395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.732414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.732424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.732432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.732451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.742253] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.742354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.742374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.742384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.742392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.742411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.752288] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.752376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.752396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.752406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.752414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.752433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.762320] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.762428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.762449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.762458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.762467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.762485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.772359] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.772446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.772465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.772475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.772483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.772502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.782407] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.782502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.782521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.782531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.782539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.782557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.792414] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.792501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.792521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.792530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.792538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.792557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.802434] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.802556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.802576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.802585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.802598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.802617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.812427] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.812521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.812540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.812549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.812558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.812577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.822498] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.822617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.353 [2024-04-19 04:16:21.822638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.353 [2024-04-19 04:16:21.822647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.353 [2024-04-19 04:16:21.822656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.353 [2024-04-19 04:16:21.822675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.353 qpair failed and we were unable to recover it.
00:25:07.353 [2024-04-19 04:16:21.832532] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.353 [2024-04-19 04:16:21.832671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.354 [2024-04-19 04:16:21.832691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.354 [2024-04-19 04:16:21.832700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.354 [2024-04-19 04:16:21.832708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.354 [2024-04-19 04:16:21.832727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.354 qpair failed and we were unable to recover it.
00:25:07.354 [2024-04-19 04:16:21.842563] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.354 [2024-04-19 04:16:21.842649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.354 [2024-04-19 04:16:21.842669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.354 [2024-04-19 04:16:21.842678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.354 [2024-04-19 04:16:21.842687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.354 [2024-04-19 04:16:21.842705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.354 qpair failed and we were unable to recover it.
00:25:07.354 [2024-04-19 04:16:21.852600] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.354 [2024-04-19 04:16:21.852727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.354 [2024-04-19 04:16:21.852748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.354 [2024-04-19 04:16:21.852757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.354 [2024-04-19 04:16:21.852765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.354 [2024-04-19 04:16:21.852785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.354 qpair failed and we were unable to recover it.
00:25:07.354 [2024-04-19 04:16:21.862566] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.354 [2024-04-19 04:16:21.862653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.354 [2024-04-19 04:16:21.862673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.354 [2024-04-19 04:16:21.862682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.354 [2024-04-19 04:16:21.862690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.354 [2024-04-19 04:16:21.862709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.354 qpair failed and we were unable to recover it.
00:25:07.354 [2024-04-19 04:16:21.872662] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.354 [2024-04-19 04:16:21.872754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.354 [2024-04-19 04:16:21.872774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.354 [2024-04-19 04:16:21.872783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.354 [2024-04-19 04:16:21.872792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.354 [2024-04-19 04:16:21.872811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.354 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.882728] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.882815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.882834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.882844] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.882852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.882871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.892714] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.892809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.892829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.892838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.892850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.892870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.902727] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.902815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.902835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.902845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.902854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.902872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.912829] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.912961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.912981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.912990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.912999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.913017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.922778] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.922864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.922883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.922893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.922901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.922919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.932823] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.932911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.932932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.932941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.932949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd1ef90
00:25:07.613 [2024-04-19 04:16:21.932968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.942864] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.942966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.942999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.943014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.943025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.613 [2024-04-19 04:16:21.943053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.952833] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.952916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.952937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.952947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.952956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.613 [2024-04-19 04:16:21.952977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.962914] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.963043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.963064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.963075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.963083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.613 [2024-04-19 04:16:21.963106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.972938] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.973062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.973083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.973092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.973101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.613 [2024-04-19 04:16:21.973122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.982916] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.983000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.613 [2024-04-19 04:16:21.983020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.613 [2024-04-19 04:16:21.983034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.613 [2024-04-19 04:16:21.983043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.613 [2024-04-19 04:16:21.983063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.613 qpair failed and we were unable to recover it.
00:25:07.613 [2024-04-19 04:16:21.992940] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.613 [2024-04-19 04:16:21.993027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:21.993047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:21.993057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:21.993066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:21.993086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.003058] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.003187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.003208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.003217] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.003228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.003249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.013066] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.013151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.013172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.013182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.013191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.013211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.023071] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.023191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.023210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.023220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.023228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.023248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.033125] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.033220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.033239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.033249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.033257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.033277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.043179] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.043351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.043372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.043381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.043389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.043410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.053135] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.053222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.053242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.053251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.053259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.053279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.063125] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.063209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.063229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.063238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.063246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.063265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.073148] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.073231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.073255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.073264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.073273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.073293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.083257] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.083383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.083403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.083413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.083421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.083440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.093223] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.093311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.093330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.093340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.093365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.093386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.103322] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.103414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.103434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.103444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.103452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.103472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.113382] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.113508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.113528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.113537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.113545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.113566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.614 [2024-04-19 04:16:22.123379] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.614 [2024-04-19 04:16:22.123485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.614 [2024-04-19 04:16:22.123505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.614 [2024-04-19 04:16:22.123515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.614 [2024-04-19 04:16:22.123523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.614 [2024-04-19 04:16:22.123542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.614 qpair failed and we were unable to recover it.
00:25:07.615 [2024-04-19 04:16:22.133408] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.615 [2024-04-19 04:16:22.133496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.615 [2024-04-19 04:16:22.133516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.615 [2024-04-19 04:16:22.133525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.615 [2024-04-19 04:16:22.133533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.615 [2024-04-19 04:16:22.133553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.615 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.143398] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.143529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.143549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.143559] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.143567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.143587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.153490] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.153576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.153597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.153606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.153615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.153635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.163537] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.163648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.163671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.163680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.163688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.163709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.173460] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.173571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.173591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.173600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.173609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.173628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.183590] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.183679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.183699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.183708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.183716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.183735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.193606] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.193725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.193745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.193754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.193763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.193783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.203642] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.203722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.203742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.203752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.203760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.203783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.213660] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.213790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.213810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.213819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.213827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.213848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.223673] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.223758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.223778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.223788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.223796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.223816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.233698] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.233783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.233803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.233813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.233822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.233841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.243657] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.243740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.243761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.243771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.874 [2024-04-19 04:16:22.243779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.874 [2024-04-19 04:16:22.243799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.874 qpair failed and we were unable to recover it.
00:25:07.874 [2024-04-19 04:16:22.253762] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.874 [2024-04-19 04:16:22.253890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.874 [2024-04-19 04:16:22.253914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.874 [2024-04-19 04:16:22.253923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.253932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.253952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.263823] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.263918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.263938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.263947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.263955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.263975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.273849] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.273927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.273946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.273956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.273964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.273984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.283870] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.283992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.284011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.284020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.284029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.284048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.293820] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.293908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.293928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.293937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.293949] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.293970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.303842] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.303925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.303945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.303954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.303962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.303982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.313947] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.314037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.314056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.314065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.314073] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.314093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.323941] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.324024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.324044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.324054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.324062] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.324081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.334000] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.334084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.334103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.334113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.334121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.334141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.344043] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.344141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.344161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.344171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.344179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.344199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.354046] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.354128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.354148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.354158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.354166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.354185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.364100] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.364181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.364201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.364210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.364219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.364238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.374138] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.374226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.374245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.374255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.374263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.374283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.384099] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.384190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.384210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.384224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.384232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.384253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:07.875 [2024-04-19 04:16:22.394185] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.875 [2024-04-19 04:16:22.394271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.875 [2024-04-19 04:16:22.394291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.875 [2024-04-19 04:16:22.394301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.875 [2024-04-19 04:16:22.394309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:07.875 [2024-04-19 04:16:22.394328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.875 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.404215] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.404297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.404316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.404326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.404334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.404359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.414212] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.414352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.414373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.414382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.414390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.414411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.424272] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.424362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.424383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.424392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.424401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.424421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.434358] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.434450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.434471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.434480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.434488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.434508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.444329] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.444428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.444448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.444458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.444467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.444486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.454374] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.454470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.454490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.454499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.454508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.454529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.464403] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.464501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.464521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.464530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.464539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.464559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.474378] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.474469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.474488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.474502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.474511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.474531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.484449] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.484534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.484553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.484562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.134 [2024-04-19 04:16:22.484571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.134 [2024-04-19 04:16:22.484591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.134 qpair failed and we were unable to recover it.
00:25:08.134 [2024-04-19 04:16:22.494484] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.134 [2024-04-19 04:16:22.494573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.134 [2024-04-19 04:16:22.494593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.134 [2024-04-19 04:16:22.494603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.494612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.494632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.504506] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.504599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.504618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.504629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.504637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.504657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.514497] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.514626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.514646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.514655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.514664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.514683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.524523] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.524609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.524628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.524638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.524646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.524665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.534584] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.534670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.534690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.534699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.534708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.534728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.544656] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.544748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.544768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.544782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.544790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.544810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.554603] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.554687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.554707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.554717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.554725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.554744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.564682] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.564806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.564830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.564840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.564847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.564867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.574724] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.574816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.574836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.574846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.574854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.574873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.584717] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.584805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.584825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.584834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.584842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.584862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.594763] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.594850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.594870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.594879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.594887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.594906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.604792] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.604878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.604898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.604908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.604916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.604939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.614797] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.614888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.614908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.614917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.614925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.614945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.624855] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.135 [2024-04-19 04:16:22.624940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.135 [2024-04-19 04:16:22.624959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.135 [2024-04-19 04:16:22.624968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.135 [2024-04-19 04:16:22.624976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.135 [2024-04-19 04:16:22.624996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.135 qpair failed and we were unable to recover it.
00:25:08.135 [2024-04-19 04:16:22.634884] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.136 [2024-04-19 04:16:22.634968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.136 [2024-04-19 04:16:22.634987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.136 [2024-04-19 04:16:22.634996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.136 [2024-04-19 04:16:22.635004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.136 [2024-04-19 04:16:22.635023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.136 qpair failed and we were unable to recover it.
00:25:08.136 [2024-04-19 04:16:22.644891] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.136 [2024-04-19 04:16:22.644976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.136 [2024-04-19 04:16:22.644996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.136 [2024-04-19 04:16:22.645005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.136 [2024-04-19 04:16:22.645013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.136 [2024-04-19 04:16:22.645032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.136 qpair failed and we were unable to recover it.
00:25:08.136 [2024-04-19 04:16:22.654999] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.136 [2024-04-19 04:16:22.655132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.136 [2024-04-19 04:16:22.655155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.136 [2024-04-19 04:16:22.655165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.136 [2024-04-19 04:16:22.655173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.136 [2024-04-19 04:16:22.655193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.136 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.664958] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.665046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.665065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.665074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.665083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.665103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.675011] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.675103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.675122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.675132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.675141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.675160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.685020] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.685108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.685128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.685137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.685145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.685165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.695066] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.695152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.695172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.695181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.695193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.695212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.705082] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.705168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.705187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.705197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.705205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.705225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.715150] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.715238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.715257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.715266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.715274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.715294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.725128] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.725233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.725252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.725261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.725270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.725289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.735203] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.735329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.735356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.735366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.735374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.735395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.745184] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.745278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.745297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.745307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.745315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.745335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.755272] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.755360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.755380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.755390] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.755398] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.755418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.765261] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.765348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.765368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.765377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.765386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.765406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.775326] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.775414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.775434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.775443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.775451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.395 [2024-04-19 04:16:22.775470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.395 qpair failed and we were unable to recover it.
00:25:08.395 [2024-04-19 04:16:22.785307] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.395 [2024-04-19 04:16:22.785432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.395 [2024-04-19 04:16:22.785452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.395 [2024-04-19 04:16:22.785467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.395 [2024-04-19 04:16:22.785476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.785497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.795346] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.795431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.795450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.795459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.795468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.795487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.805339] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.805431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.805449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.805459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.805467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.805487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.815415] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.815502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.815521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.815531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.815539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.815559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.825432] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.825520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.825539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.825548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.825556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.825576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.835462] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.835553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.835573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.835583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.835591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.835611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.845503] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.845628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.845648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.845657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.845665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.845685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.855543] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.855630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.855650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.855660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.855668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.855688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.865544] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.865651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.865671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.865680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.865689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.865708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.875603] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.875728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.875747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.875760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.875768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.875788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.885622] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.885742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.885762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.885772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.885780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.885800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.895654] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.895783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.895802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.895811] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.895819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.895839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.905682] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.396 [2024-04-19 04:16:22.905767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.396 [2024-04-19 04:16:22.905786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.396 [2024-04-19 04:16:22.905796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.396 [2024-04-19 04:16:22.905804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.396 [2024-04-19 04:16:22.905823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.396 qpair failed and we were unable to recover it.
00:25:08.396 [2024-04-19 04:16:22.915738] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.397 [2024-04-19 04:16:22.915868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.397 [2024-04-19 04:16:22.915888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.397 [2024-04-19 04:16:22.915897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.397 [2024-04-19 04:16:22.915905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.397 [2024-04-19 04:16:22.915925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.397 qpair failed and we were unable to recover it.
00:25:08.654 [2024-04-19 04:16:22.925721] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.654 [2024-04-19 04:16:22.925814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.654 [2024-04-19 04:16:22.925834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.654 [2024-04-19 04:16:22.925843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.654 [2024-04-19 04:16:22.925852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.654 [2024-04-19 04:16:22.925871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.654 qpair failed and we were unable to recover it.
00:25:08.654 [2024-04-19 04:16:22.935764] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.654 [2024-04-19 04:16:22.935850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.654 [2024-04-19 04:16:22.935871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.654 [2024-04-19 04:16:22.935881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.654 [2024-04-19 04:16:22.935889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.935908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.945794] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.945876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.945895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.945904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.945912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.945933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.955857] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.955939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.955959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.955968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.955976] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.955996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.965831] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.965919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.965943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.965953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.965961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.965981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.975874] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.975971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.975991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.976001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.976009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.976028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.985909] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.986035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.986055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.986064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.986072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.986093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:22.995942] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:22.996062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:22.996082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:22.996091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:22.996100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:22.996119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.005969] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.006053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.006072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.006082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.006090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.006115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.016010] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.016096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.016115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.016125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.016133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.016152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.026028] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.026110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.026129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.026139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.026147] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.026166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.036059] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.036147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.036166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.036176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.036184] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.036204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.046096] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.046182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.046201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.046210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.046218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.046238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.056145] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.056245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.056269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.056278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.056286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.056306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.066137] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.066228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.066247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.066257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.066265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.066285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.076160] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.655 [2024-04-19 04:16:23.076278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.655 [2024-04-19 04:16:23.076298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.655 [2024-04-19 04:16:23.076307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.655 [2024-04-19 04:16:23.076315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.655 [2024-04-19 04:16:23.076335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.655 qpair failed and we were unable to recover it.
00:25:08.655 [2024-04-19 04:16:23.086255] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.086376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.086395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.086405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.086413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.086434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.096351] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.096504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.096523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.096532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.096545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.096566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.106250] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.106335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.106359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.106369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.106377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.106397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.116276] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.116362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.116381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.116391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.116399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.116419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.126287] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.126376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.126395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.126404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.126413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.126432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.136375] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.136461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.136480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.136490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.136498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.136518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.146380] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.146484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.146503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.146512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.146521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.146540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.156406] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.156487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.156507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.156517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.156525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.156546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.166424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.166527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.166547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.166556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.166564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.166584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.656 [2024-04-19 04:16:23.176464] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.656 [2024-04-19 04:16:23.176552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.656 [2024-04-19 04:16:23.176571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.656 [2024-04-19 04:16:23.176580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.656 [2024-04-19 04:16:23.176589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.656 [2024-04-19 04:16:23.176609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.656 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.186495] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.186621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.186644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.186654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.186666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.186688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.196573] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.196658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.196679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.196688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.196697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.196717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.206548] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.206635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.206654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.206664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.206672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.206692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.216571] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.216692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.216712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.216721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.216729] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.216750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.226574] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.226663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.226683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.226692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.226700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.226719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.236642] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.236768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.236788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.236798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.236806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.236825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.246707] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.246793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.246813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.246822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.246831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.246850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.256727] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.256811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.256831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.256840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.256848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.256867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.266737] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.266857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.266876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.266886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.266894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.266913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.276784] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.276867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.276887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.276901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.276910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.276929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.286787] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.286877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.286896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.286905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.914 [2024-04-19 04:16:23.286913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.914 [2024-04-19 04:16:23.286933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.914 qpair failed and we were unable to recover it.
00:25:08.914 [2024-04-19 04:16:23.296832] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.914 [2024-04-19 04:16:23.296918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.914 [2024-04-19 04:16:23.296937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.914 [2024-04-19 04:16:23.296946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.296954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.296973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.306832] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.306918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.306938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.306947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.306956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.306975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.316881] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.317048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.317068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.317077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.317086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.317106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.326910] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.326991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.327010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.327020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.327028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.327047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.336946] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.337030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.337050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.337059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.337067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.337088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.346961] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.347081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.347100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.347109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.347117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.347137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.356999] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.357088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.357108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.357118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.357125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.357145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.367022] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.367108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.367131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.367141] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.367149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.367168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.377062] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.377146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.377165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.377175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.377183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.377202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.387093] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.387184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.387204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.387213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.387221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.387241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.397088] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.397178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.397198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.397207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.397215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.397235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.407144] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.407225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.407244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.407254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.407262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.407286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.417192] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.417368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.417388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.417398] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.417406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.417426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.915 qpair failed and we were unable to recover it.
00:25:08.915 [2024-04-19 04:16:23.427203] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.915 [2024-04-19 04:16:23.427334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.915 [2024-04-19 04:16:23.427358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.915 [2024-04-19 04:16:23.427368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.915 [2024-04-19 04:16:23.427376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.915 [2024-04-19 04:16:23.427398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.916 qpair failed and we were unable to recover it.
00:25:08.916 [2024-04-19 04:16:23.437248] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:08.916 [2024-04-19 04:16:23.437354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:08.916 [2024-04-19 04:16:23.437376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:08.916 [2024-04-19 04:16:23.437386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:08.916 [2024-04-19 04:16:23.437395] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:08.916 [2024-04-19 04:16:23.437416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:08.916 qpair failed and we were unable to recover it.
00:25:09.173 [2024-04-19 04:16:23.447272] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.173 [2024-04-19 04:16:23.447390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.173 [2024-04-19 04:16:23.447412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.173 [2024-04-19 04:16:23.447422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.173 [2024-04-19 04:16:23.447430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.173 [2024-04-19 04:16:23.447452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.173 qpair failed and we were unable to recover it.
00:25:09.173 [2024-04-19 04:16:23.457226] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.173 [2024-04-19 04:16:23.457357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.173 [2024-04-19 04:16:23.457382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.173 [2024-04-19 04:16:23.457392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.173 [2024-04-19 04:16:23.457400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.173 [2024-04-19 04:16:23.457421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.173 qpair failed and we were unable to recover it.
00:25:09.173 [2024-04-19 04:16:23.467331] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.173 [2024-04-19 04:16:23.467425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.173 [2024-04-19 04:16:23.467445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.173 [2024-04-19 04:16:23.467454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.173 [2024-04-19 04:16:23.467463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.173 [2024-04-19 04:16:23.467483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.173 qpair failed and we were unable to recover it.
00:25:09.173 [2024-04-19 04:16:23.477361] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.173 [2024-04-19 04:16:23.477452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.173 [2024-04-19 04:16:23.477471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.173 [2024-04-19 04:16:23.477481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.173 [2024-04-19 04:16:23.477489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.173 [2024-04-19 04:16:23.477509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.173 qpair failed and we were unable to recover it.
00:25:09.173 [2024-04-19 04:16:23.487357] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.173 [2024-04-19 04:16:23.487440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.173 [2024-04-19 04:16:23.487461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.487470] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.487479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.487499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.497412] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.497498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.497518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.497528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.497536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.497562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.507374] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.507474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.507493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.507502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.507510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.507530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.517399] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.517524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.517543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.517553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.517561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.517580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.527492] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.527581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.527600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.527609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.527618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.527637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.537601] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.537715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.537734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.537744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.537753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.537772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.547485] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.547573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.547593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.547602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.547610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.547630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.557505] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.557595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.557615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.557624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.557633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.557653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.567569] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.567668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.567687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.567696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.567705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.567725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.577574] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.577657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.577677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.577686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.577695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.577715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.587624] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.587737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.587757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.587767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.587780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.587800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.597693] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.597792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.597889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.597899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.597907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.597927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.607666] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.607768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.607787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.607796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.607804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.607824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.617683] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.617779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.617799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.617808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.617816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.617836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.627780] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.627861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.627881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.627891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.627898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.627918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.637726] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.637856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.637876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.637885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.637894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.637913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.647786] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.647877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.647896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.647906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.647914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.647934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.657892] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.657999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.658019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.658028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.658036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.658056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.667839] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.667974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.667993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.668002] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.668011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.668030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.677858] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.677941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.677960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.174 [2024-04-19 04:16:23.677973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.174 [2024-04-19 04:16:23.677982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.174 [2024-04-19 04:16:23.678002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.174 qpair failed and we were unable to recover it.
00:25:09.174 [2024-04-19 04:16:23.687930] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.174 [2024-04-19 04:16:23.688055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.174 [2024-04-19 04:16:23.688074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.175 [2024-04-19 04:16:23.688083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.175 [2024-04-19 04:16:23.688091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.175 [2024-04-19 04:16:23.688110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.175 qpair failed and we were unable to recover it.
00:25:09.175 [2024-04-19 04:16:23.698048] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.175 [2024-04-19 04:16:23.698134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.175 [2024-04-19 04:16:23.698156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.175 [2024-04-19 04:16:23.698166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.175 [2024-04-19 04:16:23.698174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.175 [2024-04-19 04:16:23.698196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.175 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.707956] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.708045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.708067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.708076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.708085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.708106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.718057] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.718163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.718183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.718193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.718201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.718221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.728026] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.728113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.728135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.728144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.728153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.728174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.738125] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.738240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.738260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.738269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.738277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.738298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.748092] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.748214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.748234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.748243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.748252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.748272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.758198] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.758287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.758307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.758316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.433 [2024-04-19 04:16:23.758324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.433 [2024-04-19 04:16:23.758351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.433 qpair failed and we were unable to recover it.
00:25:09.433 [2024-04-19 04:16:23.768132] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.433 [2024-04-19 04:16:23.768243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.433 [2024-04-19 04:16:23.768267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.433 [2024-04-19 04:16:23.768276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.768284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.768304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.778291] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.778460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.778480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.778489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.778498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.778518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.788254] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.788341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.788367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.788377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.788385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.788405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.798277] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.798366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.798386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.798395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.798403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.798424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.808269] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.808365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.808385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.808395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.808403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.808423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.818332] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.818426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.818445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.818455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.818463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.818483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.828351] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.828454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.828473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.828483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.828491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.828511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.838452] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.838585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.838606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.838615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.838624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.838644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.848423] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.848508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.848528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.848537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.848545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.848565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.858456] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.858581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.858604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.858614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.858622] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.858642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.868450] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.868531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.868550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.868560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.868568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.868588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.878572] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.878664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.878684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.878693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.878701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.878721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.888557] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.888640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.888660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.888669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.888677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.434 [2024-04-19 04:16:23.888697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.434 qpair failed and we were unable to recover it.
00:25:09.434 [2024-04-19 04:16:23.898529] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.434 [2024-04-19 04:16:23.898616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.434 [2024-04-19 04:16:23.898636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.434 [2024-04-19 04:16:23.898646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.434 [2024-04-19 04:16:23.898654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.435 [2024-04-19 04:16:23.898678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.435 qpair failed and we were unable to recover it.
00:25:09.435 [2024-04-19 04:16:23.908624] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.435 [2024-04-19 04:16:23.908717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.435 [2024-04-19 04:16:23.908736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.435 [2024-04-19 04:16:23.908745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.435 [2024-04-19 04:16:23.908754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.435 [2024-04-19 04:16:23.908774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.435 qpair failed and we were unable to recover it.
00:25:09.435 [2024-04-19 04:16:23.918560] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.435 [2024-04-19 04:16:23.918650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.435 [2024-04-19 04:16:23.918670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.435 [2024-04-19 04:16:23.918679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.435 [2024-04-19 04:16:23.918687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.435 [2024-04-19 04:16:23.918707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.435 qpair failed and we were unable to recover it.
00:25:09.435 [2024-04-19 04:16:23.928629] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.435 [2024-04-19 04:16:23.928716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.435 [2024-04-19 04:16:23.928736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.435 [2024-04-19 04:16:23.928745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.435 [2024-04-19 04:16:23.928754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90
00:25:09.435 [2024-04-19 04:16:23.928774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:09.435 qpair failed and we were unable to recover it.
00:25:09.435 [2024-04-19 04:16:23.938733] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.435 [2024-04-19 04:16:23.938824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.435 [2024-04-19 04:16:23.938844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.435 [2024-04-19 04:16:23.938853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.435 [2024-04-19 04:16:23.938862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90 00:25:09.435 [2024-04-19 04:16:23.938882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.435 qpair failed and we were unable to recover it. 00:25:09.435 [2024-04-19 04:16:23.948788] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.435 [2024-04-19 04:16:23.948924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.435 [2024-04-19 04:16:23.948947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.435 [2024-04-19 04:16:23.948957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.435 [2024-04-19 04:16:23.948965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90 00:25:09.435 [2024-04-19 04:16:23.948984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.435 qpair failed and we were unable to recover it. 00:25:09.435 [2024-04-19 04:16:23.958787] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.435 [2024-04-19 04:16:23.958877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.435 [2024-04-19 04:16:23.958901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.435 [2024-04-19 04:16:23.958912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.435 [2024-04-19 04:16:23.958925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c8000b90 00:25:09.435 [2024-04-19 04:16:23.958954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:09.435 qpair failed and we were unable to recover it. 
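[Editor's note: "sct 1, sc 130" is the completion status of the CONNECT capsule: status code type 1h (command specific) and status code 130 decimal, i.e. 0x82 --
  printf '0x%x\n' 130    # -> 0x82
-- which for a Fabrics CONNECT is most likely "Connect Invalid Parameters" (SPDK_NVMF_FABRIC_SC_INVALID_PARAM in SPDK's nvmf spec headers), consistent with the target-side "Unknown controller ID 0x1" complaint.]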
[The attempts then move to a second I/O queue pair and fail the same way:]
00:25:09.692 [2024-04-19 04:16:23.978807] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:09.692 [2024-04-19 04:16:23.978912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:09.692 [2024-04-19 04:16:23.978931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:09.692 [2024-04-19 04:16:23.978939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:09.692 [2024-04-19 04:16:23.978945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f83c0000b90
00:25:09.692 [2024-04-19 04:16:23.978961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:09.692 qpair failed and we were unable to recover it.
[One further identical attempt on qpair id 2 at 04:16:23.988 is elided. The keep-alive then fails and the controller is reset:]
00:25:09.692 [2024-04-19 04:16:23.989083] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:25:09.692 A controller has encountered a failure and is being reset.
00:25:09.692 [2024-04-19 04:16:23.989172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2cb00 (9): Bad file descriptor
00:25:09.692 Controller properly reset.
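[Editor's note: the recovery above re-attaches over NVMe-oF/TCP at 10.0.0.2:4420. For hand-checking the same CONNECT path, a sketch using the kernel initiator (nvme-cli) rather than the SPDK initiator this test exercises:
  sudo modprobe nvme-tcp
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list                                      # namespace appears once CONNECT succeeds
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
Transport, address, service ID and subsystem NQN are copied from the failing records above.]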
00:25:09.692 Initializing NVMe Controllers 00:25:09.692 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:09.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:09.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:09.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:09.692 Initialization complete. Launching workers. 00:25:09.692 Starting thread on core 1 00:25:09.692 Starting thread on core 2 00:25:09.692 Starting thread on core 3 00:25:09.692 Starting thread on core 0 00:25:09.692 04:16:24 -- host/target_disconnect.sh@59 -- # sync 00:25:09.693 00:25:09.693 real 0m10.931s 00:25:09.693 user 0m19.221s 00:25:09.693 sys 0m4.044s 00:25:09.693 04:16:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:09.693 04:16:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.693 ************************************ 00:25:09.693 END TEST nvmf_target_disconnect_tc2 00:25:09.693 ************************************ 00:25:09.693 04:16:24 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:25:09.693 04:16:24 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:09.693 04:16:24 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:25:09.693 04:16:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:09.693 04:16:24 -- nvmf/common.sh@117 -- # sync 00:25:09.693 04:16:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.693 04:16:24 -- nvmf/common.sh@120 -- # set +e 00:25:09.693 04:16:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.693 04:16:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.693 rmmod nvme_tcp 00:25:09.693 rmmod nvme_fabrics 00:25:09.693 rmmod nvme_keyring 00:25:09.693 04:16:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.693 04:16:24 -- nvmf/common.sh@124 -- # set -e 00:25:09.693 04:16:24 -- nvmf/common.sh@125 -- # return 0 00:25:09.693 04:16:24 -- nvmf/common.sh@478 -- # '[' -n 3966693 ']' 00:25:09.693 04:16:24 -- nvmf/common.sh@479 -- # killprocess 3966693 00:25:09.693 04:16:24 -- common/autotest_common.sh@936 -- # '[' -z 3966693 ']' 00:25:09.693 04:16:24 -- common/autotest_common.sh@940 -- # kill -0 3966693 00:25:09.950 04:16:24 -- common/autotest_common.sh@941 -- # uname 00:25:09.950 04:16:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.950 04:16:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3966693 00:25:09.950 04:16:24 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:25:09.950 04:16:24 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:25:09.950 04:16:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3966693' 00:25:09.950 killing process with pid 3966693 00:25:09.950 04:16:24 -- common/autotest_common.sh@955 -- # kill 3966693 00:25:09.950 04:16:24 -- common/autotest_common.sh@960 -- # wait 3966693 00:25:10.208 04:16:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:10.208 04:16:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:10.208 04:16:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:10.208 04:16:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.208 04:16:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.208 04:16:24 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.208 04:16:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.208 04:16:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.110 04:16:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:12.110 00:25:12.110 real 0m19.739s 00:25:12.110 user 0m47.009s 00:25:12.110 sys 0m8.989s 00:25:12.110 04:16:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:12.110 04:16:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.110 ************************************ 00:25:12.110 END TEST nvmf_target_disconnect 00:25:12.110 ************************************ 00:25:12.111 04:16:26 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:25:12.111 04:16:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:12.111 04:16:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.368 04:16:26 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:25:12.368 00:25:12.368 real 18m34.077s 00:25:12.368 user 39m53.410s 00:25:12.368 sys 5m56.244s 00:25:12.368 04:16:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:12.368 04:16:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.368 ************************************ 00:25:12.368 END TEST nvmf_tcp 00:25:12.368 ************************************ 00:25:12.368 04:16:26 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:25:12.368 04:16:26 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.368 04:16:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:12.368 04:16:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.368 04:16:26 -- common/autotest_common.sh@10 -- # set +x 00:25:12.368 ************************************ 00:25:12.368 START TEST spdkcli_nvmf_tcp 00:25:12.368 ************************************ 00:25:12.368 04:16:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.626 * Looking for test storage... 
00:25:12.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:25:12.626 04:16:26 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
04:16:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
04:16:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
04:16:26 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
04:16:26 -- nvmf/common.sh@7 -- # uname -s
04:16:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
04:16:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
04:16:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
04:16:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
04:16:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
04:16:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
04:16:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
04:16:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
04:16:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
04:16:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
04:16:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
04:16:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
04:16:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
04:16:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
04:16:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy
04:16:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
04:16:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
04:16:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
04:16:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:16:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
04:16:26 -- paths/export.sh@2-6 -- # [export PATH with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended; the repeated full PATH dumps are elided]
04:16:26 -- nvmf/common.sh@47 -- # : 0
04:16:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
04:16:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args
04:16:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
04:16:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
04:16:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
04:16:26 -- nvmf/common.sh@33 -- # '[' -n '' ']'
04:16:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
04:16:26 -- nvmf/common.sh@51 -- # have_pci_nics=0
04:16:26 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
04:16:26 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
04:16:26 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
04:16:26 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
04:16:26 -- common/autotest_common.sh@710 -- # xtrace_disable
04:16:26 -- common/autotest_common.sh@10 -- # set +x
04:16:26 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
04:16:26 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3968567
04:16:26 -- spdkcli/common.sh@34 -- # waitforlisten 3968567
04:16:26 -- common/autotest_common.sh@817 -- # '[' -z 3968567 ']'
04:16:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
04:16:26 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
04:16:26 -- common/autotest_common.sh@822 -- # local max_retries=100
04:16:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:12.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:16:26 -- common/autotest_common.sh@826 -- # xtrace_disable
04:16:26 -- common/autotest_common.sh@10 -- # set +x
00:25:12.626 [2024-04-19 04:16:26.987427] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
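[Editor's note: the spdkcli_job.py run below drives SPDK's interactive spdkcli shell against this target. For hand reproduction, a minimal sketch -- assuming spdkcli.py executes one-shot commands passed as arguments, the same way the harness later invokes "spdkcli.py ll /nvmf":
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc3"
  scripts/spdkcli.py "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
  scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
  scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
  scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
  scripts/spdkcli.py ll /nvmf    # print the resulting tree, as check_match does later
Each command string is taken verbatim from the job list in the log below.]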
00:25:12.626 [2024-04-19 04:16:26.987485] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3968567 ] 00:25:12.626 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.626 [2024-04-19 04:16:27.070230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:12.884 [2024-04-19 04:16:27.158875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.884 [2024-04-19 04:16:27.158880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.457 04:16:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:13.457 04:16:27 -- common/autotest_common.sh@850 -- # return 0 00:25:13.457 04:16:27 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:13.457 04:16:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:13.457 04:16:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.457 04:16:27 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:13.457 04:16:27 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:13.457 04:16:27 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:13.457 04:16:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:13.457 04:16:27 -- common/autotest_common.sh@10 -- # set +x 00:25:13.457 04:16:27 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:13.457 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:13.457 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:13.457 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:13.457 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:13.457 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:13.457 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:13.457 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:13.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:13.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.457 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:13.457 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.458 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:13.458 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:13.458 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:13.458 ' 00:25:14.046 [2024-04-19 04:16:28.298165] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:15.946 [2024-04-19 04:16:30.317422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.318 [2024-04-19 04:16:31.489703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:19.217 [2024-04-19 04:16:33.652713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:21.114 [2024-04-19 04:16:35.506944] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:22.486 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:22.486 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:22.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:22.486 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.486 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:22.486 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:22.486 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:22.744 04:16:37 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:22.744 04:16:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:22.744 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 04:16:37 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:22.744 04:16:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:22.744 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.744 04:16:37 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:22.744 04:16:37 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.002 04:16:37 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:23.259 04:16:37 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:23.259 04:16:37 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:23.259 04:16:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:23.259 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:25:23.259 04:16:37 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:23.259 04:16:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:23.259 04:16:37 -- common/autotest_common.sh@10 
-- # set +x 00:25:23.259 04:16:37 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:23.259 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:23.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:23.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:23.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:23.260 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:23.260 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:23.260 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:23.260 ' 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:28.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:28.525 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:28.525 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:28.525 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:28.525 04:16:42 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:28.525 04:16:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:28.525 04:16:42 -- common/autotest_common.sh@10 -- # set +x 00:25:28.525 04:16:42 -- spdkcli/nvmf.sh@90 -- # killprocess 3968567 00:25:28.525 04:16:42 -- common/autotest_common.sh@936 -- # '[' -z 3968567 ']' 00:25:28.525 04:16:42 -- common/autotest_common.sh@940 -- # kill -0 3968567 00:25:28.525 04:16:42 -- common/autotest_common.sh@941 -- # uname 00:25:28.525 04:16:42 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.525 04:16:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3968567 00:25:28.525 04:16:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:28.525 04:16:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:28.525 04:16:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3968567' 00:25:28.525 killing process with pid 3968567 00:25:28.525 04:16:42 -- common/autotest_common.sh@955 -- # kill 3968567 00:25:28.525 [2024-04-19 04:16:42.640596] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:28.525 04:16:42 -- common/autotest_common.sh@960 -- # wait 3968567 00:25:28.525 04:16:42 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:28.525 04:16:42 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:28.525 04:16:42 -- spdkcli/common.sh@13 -- # '[' -n 3968567 ']' 00:25:28.525 04:16:42 -- spdkcli/common.sh@14 -- # killprocess 3968567 00:25:28.525 04:16:42 -- common/autotest_common.sh@936 -- # '[' -z 3968567 ']' 00:25:28.525 04:16:42 -- common/autotest_common.sh@940 -- # kill -0 3968567 00:25:28.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3968567) - No such process 00:25:28.525 04:16:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3968567 is not found' 00:25:28.525 Process with pid 3968567 is not found 00:25:28.525 04:16:42 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:28.525 04:16:42 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:28.525 04:16:42 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:28.525 00:25:28.525 real 0m16.043s 00:25:28.525 user 0m33.152s 00:25:28.525 sys 0m0.792s 00:25:28.525 04:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.525 04:16:42 -- common/autotest_common.sh@10 -- # set +x 00:25:28.525 ************************************ 00:25:28.525 END TEST spdkcli_nvmf_tcp 00:25:28.525 ************************************ 00:25:28.525 04:16:42 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:28.525 04:16:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:28.525 04:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.525 04:16:42 -- common/autotest_common.sh@10 -- # set +x 00:25:28.525 ************************************ 00:25:28.525 START TEST nvmf_identify_passthru 00:25:28.525 ************************************ 00:25:28.525 04:16:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:28.783 * Looking for test storage... 
00:25:28.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:28.783 04:16:43 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
04:16:43 -- nvmf/common.sh@7 -- # uname -s
04:16:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
04:16:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
04:16:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
04:16:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
04:16:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
04:16:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
04:16:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
04:16:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
04:16:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
04:16:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
04:16:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
04:16:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
04:16:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
04:16:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
04:16:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy
04:16:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
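[Editor's note: the NVME_HOSTNQN/NVME_HOSTID pair above comes from nvme-cli; nvme gen-hostnqn prints a UUID-based host NQN (on most builds derived from the machine's DMI product UUID when available, otherwise randomly generated), and the harness reuses its UUID part as the host ID:
  $ nvme gen-hostnqn
  nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
The UUID shown is the one this rig reported; other machines will print a different value.]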
04:16:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
04:16:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
04:16:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:16:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
04:16:43 -- paths/export.sh@2-6 -- # [export PATH with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended; full PATH dumps elided]
04:16:43 -- nvmf/common.sh@47 -- # : 0
04:16:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
04:16:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args
04:16:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
04:16:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
04:16:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
04:16:43 -- nvmf/common.sh@33 -- # '[' -n '' ']'
04:16:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
04:16:43 -- nvmf/common.sh@51 -- # have_pci_nics=0
04:16:43 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
04:16:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
04:16:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:16:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
04:16:43 -- paths/export.sh@2-6 -- # [export PATH; the same PATH dumps repeat verbatim and are elided]
04:16:43 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:28.784 04:16:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:28.784 04:16:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.784 04:16:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:28.784 04:16:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:28.784 04:16:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:28.784 04:16:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.784 04:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:28.784 04:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.784 04:16:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:28.784 04:16:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:28.784 04:16:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.784 04:16:43 -- common/autotest_common.sh@10 -- # set +x 00:25:34.043 04:16:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:34.043 04:16:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.043 04:16:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.043 04:16:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.043 04:16:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.043 04:16:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.043 04:16:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.043 04:16:48 -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.043 04:16:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.043 04:16:48 -- nvmf/common.sh@296 -- # e810=() 00:25:34.043 04:16:48 -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.300 04:16:48 -- nvmf/common.sh@297 -- # x722=() 00:25:34.300 04:16:48 -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.300 04:16:48 -- nvmf/common.sh@298 -- # mlx=() 00:25:34.300 04:16:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.300 04:16:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.300 04:16:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.300 04:16:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.300 04:16:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.300 04:16:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.300 04:16:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:34.300 Found 0000:af:00.0 (0x8086 - 
0x159b) 00:25:34.300 04:16:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.300 04:16:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:34.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:34.300 04:16:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.300 04:16:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.301 04:16:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.301 04:16:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.301 04:16:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:34.301 04:16:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.301 04:16:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:34.301 Found net devices under 0000:af:00.0: cvl_0_0 00:25:34.301 04:16:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.301 04:16:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.301 04:16:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.301 04:16:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:34.301 04:16:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.301 04:16:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:34.301 Found net devices under 0000:af:00.1: cvl_0_1 00:25:34.301 04:16:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.301 04:16:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:34.301 04:16:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:34.301 04:16:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:34.301 04:16:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:34.301 04:16:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.301 04:16:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.301 04:16:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.301 04:16:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.301 04:16:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.301 04:16:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.301 04:16:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.301 04:16:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.301 04:16:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.301 04:16:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.301 04:16:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.301 04:16:48 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:25:34.301 04:16:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.301 04:16:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.301 04:16:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.301 04:16:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.301 04:16:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.301 04:16:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.557 04:16:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.557 04:16:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:25:34.557 00:25:34.557 --- 10.0.0.2 ping statistics --- 00:25:34.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.557 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:34.557 04:16:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:25:34.557 00:25:34.557 --- 10.0.0.1 ping statistics --- 00:25:34.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.557 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:25:34.557 04:16:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.557 04:16:48 -- nvmf/common.sh@411 -- # return 0 00:25:34.557 04:16:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:34.557 04:16:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.557 04:16:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:34.557 04:16:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:34.557 04:16:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.557 04:16:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:34.557 04:16:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:34.557 04:16:48 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:34.557 04:16:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:34.557 04:16:48 -- common/autotest_common.sh@10 -- # set +x 00:25:34.557 04:16:48 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:34.557 04:16:48 -- common/autotest_common.sh@1510 -- # bdfs=() 00:25:34.557 04:16:48 -- common/autotest_common.sh@1510 -- # local bdfs 00:25:34.557 04:16:48 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:25:34.557 04:16:48 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:25:34.557 04:16:48 -- common/autotest_common.sh@1499 -- # bdfs=() 00:25:34.557 04:16:48 -- common/autotest_common.sh@1499 -- # local bdfs 00:25:34.557 04:16:48 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:34.557 04:16:48 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:34.557 04:16:48 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:25:34.557 04:16:49 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:25:34.557 04:16:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:86:00.0 00:25:34.557 04:16:49 -- common/autotest_common.sh@1513 -- # echo 0000:86:00.0 00:25:34.557 04:16:49 -- 
target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:25:34.557 04:16:49 -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:25:34.557 04:16:49 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:25:34.557 04:16:49 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:34.557 04:16:49 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.746 04:16:53 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:25:38.746 04:16:53 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:25:38.746 04:16:53 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:38.746 04:16:53 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:39.005 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.190 04:16:57 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:43.190 04:16:57 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:43.190 04:16:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:43.190 04:16:57 -- common/autotest_common.sh@10 -- # set +x 00:25:43.190 04:16:57 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:43.190 04:16:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:43.190 04:16:57 -- common/autotest_common.sh@10 -- # set +x 00:25:43.190 04:16:57 -- target/identify_passthru.sh@31 -- # nvmfpid=3976029 00:25:43.190 04:16:57 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:43.190 04:16:57 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.190 04:16:57 -- target/identify_passthru.sh@35 -- # waitforlisten 3976029 00:25:43.190 04:16:57 -- common/autotest_common.sh@817 -- # '[' -z 3976029 ']' 00:25:43.190 04:16:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.190 04:16:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:43.190 04:16:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.190 04:16:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:43.190 04:16:57 -- common/autotest_common.sh@10 -- # set +x 00:25:43.190 [2024-04-19 04:16:57.584081] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:25:43.190 [2024-04-19 04:16:57.584142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.190 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.190 [2024-04-19 04:16:57.670942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.449 [2024-04-19 04:16:57.759593] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.449 [2024-04-19 04:16:57.759636] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:43.449 [2024-04-19 04:16:57.759646] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.449 [2024-04-19 04:16:57.759655] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.449 [2024-04-19 04:16:57.759662] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.449 [2024-04-19 04:16:57.759712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.449 [2024-04-19 04:16:57.759794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.449 [2024-04-19 04:16:57.759931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.449 [2024-04-19 04:16:57.759932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.016 04:16:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:44.016 04:16:58 -- common/autotest_common.sh@850 -- # return 0 00:25:44.016 04:16:58 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:44.016 04:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.016 04:16:58 -- common/autotest_common.sh@10 -- # set +x 00:25:44.016 INFO: Log level set to 20 00:25:44.016 INFO: Requests: 00:25:44.016 { 00:25:44.016 "jsonrpc": "2.0", 00:25:44.016 "method": "nvmf_set_config", 00:25:44.016 "id": 1, 00:25:44.016 "params": { 00:25:44.016 "admin_cmd_passthru": { 00:25:44.016 "identify_ctrlr": true 00:25:44.016 } 00:25:44.016 } 00:25:44.016 } 00:25:44.016 00:25:44.016 INFO: response: 00:25:44.016 { 00:25:44.016 "jsonrpc": "2.0", 00:25:44.016 "id": 1, 00:25:44.016 "result": true 00:25:44.016 } 00:25:44.016 00:25:44.016 04:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.016 04:16:58 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:44.016 04:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.016 04:16:58 -- common/autotest_common.sh@10 -- # set +x 00:25:44.016 INFO: Setting log level to 20 00:25:44.016 INFO: Setting log level to 20 00:25:44.016 INFO: Log level set to 20 00:25:44.016 INFO: Log level set to 20 00:25:44.016 INFO: Requests: 00:25:44.016 { 00:25:44.016 "jsonrpc": "2.0", 00:25:44.016 "method": "framework_start_init", 00:25:44.016 "id": 1 00:25:44.016 } 00:25:44.016 00:25:44.016 INFO: Requests: 00:25:44.016 { 00:25:44.016 "jsonrpc": "2.0", 00:25:44.016 "method": "framework_start_init", 00:25:44.016 "id": 1 00:25:44.016 } 00:25:44.016 00:25:44.275 [2024-04-19 04:16:58.618369] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:44.275 INFO: response: 00:25:44.275 { 00:25:44.275 "jsonrpc": "2.0", 00:25:44.275 "id": 1, 00:25:44.275 "result": true 00:25:44.275 } 00:25:44.275 00:25:44.275 INFO: response: 00:25:44.275 { 00:25:44.275 "jsonrpc": "2.0", 00:25:44.275 "id": 1, 00:25:44.275 "result": true 00:25:44.275 } 00:25:44.275 00:25:44.275 04:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.275 04:16:58 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:44.275 04:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.275 04:16:58 -- common/autotest_common.sh@10 -- # set +x 00:25:44.275 INFO: Setting log level to 40 00:25:44.275 INFO: Setting log level to 40 00:25:44.275 INFO: Setting log level to 40 00:25:44.275 [2024-04-19 04:16:58.632312] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.275 04:16:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.275 04:16:58 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:44.275 04:16:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:44.275 04:16:58 -- common/autotest_common.sh@10 -- # set +x 00:25:44.275 04:16:58 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:25:44.275 04:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.275 04:16:58 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 Nvme0n1 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:47.591 04:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.591 04:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:47.591 04:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.591 04:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.591 04:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.591 04:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 [2024-04-19 04:17:01.557582] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:47.591 04:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.591 04:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 [2024-04-19 04:17:01.565340] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:47.591 [ 00:25:47.591 { 00:25:47.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:47.591 "subtype": "Discovery", 00:25:47.591 "listen_addresses": [], 00:25:47.591 "allow_any_host": true, 00:25:47.591 "hosts": [] 00:25:47.591 }, 00:25:47.591 { 00:25:47.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:47.591 "subtype": "NVMe", 00:25:47.591 "listen_addresses": [ 00:25:47.591 { 00:25:47.591 "transport": "TCP", 00:25:47.591 "trtype": "TCP", 00:25:47.591 "adrfam": "IPv4", 00:25:47.591 "traddr": "10.0.0.2", 00:25:47.591 "trsvcid": "4420" 00:25:47.591 } 00:25:47.591 ], 00:25:47.591 "allow_any_host": true, 00:25:47.591 "hosts": [], 00:25:47.591 "serial_number": "SPDK00000000000001", 00:25:47.591 "model_number": "SPDK bdev Controller", 00:25:47.591 "max_namespaces": 1, 00:25:47.591 "min_cntlid": 1, 00:25:47.591 "max_cntlid": 65519, 00:25:47.591 "namespaces": [ 00:25:47.591 { 00:25:47.591 "nsid": 1, 00:25:47.591 "bdev_name": "Nvme0n1", 00:25:47.591 "name": "Nvme0n1", 00:25:47.591 "nguid": "CBF14DD65793472F8CF99E1442F18584", 00:25:47.591 "uuid": "cbf14dd6-5793-472f-8cf9-9e1442f18584" 00:25:47.591 } 00:25:47.591 ] 00:25:47.591 } 00:25:47.591 ] 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:47.591 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.591 04:17:01 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:25:47.591 04:17:01 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:47.591 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.591 04:17:01 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:47.591 04:17:01 -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:47.591 04:17:01 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.591 04:17:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.591 04:17:01 -- common/autotest_common.sh@10 -- # set +x 00:25:47.591 04:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.591 04:17:01 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:47.591 04:17:01 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:47.591 04:17:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:47.591 04:17:01 -- nvmf/common.sh@117 -- # sync 00:25:47.591 04:17:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:47.591 04:17:01 -- nvmf/common.sh@120 -- # set +e 00:25:47.591 04:17:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:47.591 04:17:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:47.591 rmmod nvme_tcp 00:25:47.591 rmmod nvme_fabrics 00:25:47.591 rmmod nvme_keyring 00:25:47.591 04:17:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:47.591 04:17:02 -- nvmf/common.sh@124 -- # set -e 00:25:47.591 04:17:02 -- nvmf/common.sh@125 -- # return 0 00:25:47.591 04:17:02 -- nvmf/common.sh@478 -- # '[' -n 3976029 ']' 00:25:47.591 04:17:02 -- nvmf/common.sh@479 -- # killprocess 3976029 00:25:47.591 04:17:02 -- common/autotest_common.sh@936 -- # '[' -z 3976029 ']' 00:25:47.591 04:17:02 -- common/autotest_common.sh@940 -- # kill -0 3976029 00:25:47.591 04:17:02 -- common/autotest_common.sh@941 -- # uname 00:25:47.591 04:17:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:47.591 04:17:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3976029 00:25:47.591 04:17:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:47.591 04:17:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:47.591 04:17:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3976029' 00:25:47.591 killing process with pid 3976029 00:25:47.591 04:17:02 -- common/autotest_common.sh@955 -- # kill 3976029 00:25:47.591 [2024-04-19 04:17:02.079632] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:25:47.591 04:17:02 -- common/autotest_common.sh@960 -- # wait 3976029 00:25:49.494 04:17:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:49.494 04:17:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:49.494 04:17:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:49.494 04:17:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.494 04:17:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.494 04:17:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.494 04:17:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:49.494 04:17:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.397 04:17:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.397 00:25:51.397 real 0m22.703s 00:25:51.397 user 0m31.192s 00:25:51.397 sys 0m5.250s 00:25:51.397 04:17:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:51.397 04:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.397 ************************************ 00:25:51.397 END TEST nvmf_identify_passthru 00:25:51.397 ************************************ 00:25:51.397 04:17:05 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:51.397 04:17:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:51.397 04:17:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:51.397 04:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.397 ************************************ 00:25:51.397 START TEST nvmf_dif 00:25:51.397 ************************************ 00:25:51.397 04:17:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:51.657 * Looking for test storage... 
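Note: the pass/fail condition for the identify_passthru run that just finished is the pair of string tests visible at identify_passthru.sh lines 63 and 68 above; in outline (variable names as in the trace, failure branch abbreviated):

  [ "$nvmf_serial_number" != "$nvme_serial_number" ] && exit 1   # both read BTLJ916308MR1P0FGN
  [ "$nvmf_model_number"  != "$nvme_model_number"  ] && exit 1   # both read INTEL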
00:25:51.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.657 04:17:05 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.657 04:17:05 -- nvmf/common.sh@7 -- # uname -s 00:25:51.657 04:17:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.657 04:17:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.657 04:17:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.657 04:17:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.657 04:17:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.657 04:17:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.657 04:17:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.657 04:17:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.657 04:17:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.657 04:17:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.657 04:17:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:51.657 04:17:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:51.657 04:17:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.657 04:17:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.657 04:17:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.657 04:17:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.657 04:17:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.657 04:17:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.657 04:17:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.657 04:17:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.657 04:17:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.657 04:17:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.657 04:17:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.657 04:17:05 -- paths/export.sh@5 -- # export PATH 00:25:51.657 04:17:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.657 04:17:05 -- nvmf/common.sh@47 -- # : 0 00:25:51.657 04:17:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.657 04:17:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.657 04:17:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.657 04:17:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.657 04:17:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.657 04:17:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.657 04:17:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.657 04:17:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.657 04:17:05 -- target/dif.sh@15 -- # NULL_META=16 00:25:51.657 04:17:05 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:51.657 04:17:05 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:51.657 04:17:05 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:51.657 04:17:05 -- target/dif.sh@135 -- # nvmftestinit 00:25:51.657 04:17:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:51.657 04:17:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.657 04:17:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:51.657 04:17:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:51.657 04:17:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:51.657 04:17:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.657 04:17:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:51.657 04:17:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.657 04:17:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:51.657 04:17:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:51.657 04:17:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.657 04:17:05 -- common/autotest_common.sh@10 -- # set +x 00:25:56.930 04:17:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:56.931 04:17:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.931 04:17:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.931 04:17:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.931 04:17:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.931 04:17:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.931 04:17:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.931 04:17:11 -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.931 04:17:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.931 04:17:11 -- nvmf/common.sh@296 -- # e810=() 00:25:56.931 04:17:11 -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.931 04:17:11 -- nvmf/common.sh@297 -- # x722=() 00:25:56.931 04:17:11 -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.931 04:17:11 -- nvmf/common.sh@298 -- # mlx=() 00:25:56.931 04:17:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.931 04:17:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:56.931 04:17:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.931 04:17:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.931 04:17:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.931 04:17:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.931 04:17:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:56.931 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:56.931 04:17:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.931 04:17:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:56.931 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:56.931 04:17:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.931 04:17:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.931 04:17:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.931 04:17:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:56.931 Found net devices under 0000:af:00.0: cvl_0_0 00:25:56.931 04:17:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.931 04:17:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.931 04:17:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.931 04:17:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.931 04:17:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:56.931 Found net devices under 0000:af:00.1: cvl_0_1 00:25:56.931 04:17:11 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:56.931 04:17:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:56.931 04:17:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:56.931 04:17:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:56.931 04:17:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.931 04:17:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.931 04:17:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.931 04:17:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.931 04:17:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.931 04:17:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.931 04:17:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.931 04:17:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.931 04:17:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.931 04:17:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.931 04:17:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.931 04:17:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.931 04:17:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.189 04:17:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.189 04:17:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.189 04:17:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.189 04:17:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.189 04:17:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.189 04:17:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.189 04:17:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:25:57.189 00:25:57.189 --- 10.0.0.2 ping statistics --- 00:25:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.189 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:25:57.189 04:17:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:25:57.189 00:25:57.189 --- 10.0.0.1 ping statistics --- 00:25:57.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.189 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:25:57.190 04:17:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.190 04:17:11 -- nvmf/common.sh@411 -- # return 0 00:25:57.190 04:17:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:25:57.190 04:17:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:59.728 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:59.728 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:59.728 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:59.987 04:17:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.987 04:17:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:59.987 04:17:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:59.987 04:17:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.987 04:17:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:59.987 04:17:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:59.987 04:17:14 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:59.987 04:17:14 -- target/dif.sh@137 -- # nvmfappstart 00:25:59.987 04:17:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:59.987 04:17:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:59.987 04:17:14 -- common/autotest_common.sh@10 -- # set +x 00:25:59.987 04:17:14 -- nvmf/common.sh@470 -- # nvmfpid=3981864 00:25:59.987 04:17:14 -- nvmf/common.sh@471 -- # waitforlisten 3981864 00:25:59.987 04:17:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:59.987 04:17:14 -- common/autotest_common.sh@817 -- # '[' -z 3981864 ']' 00:25:59.987 04:17:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.987 04:17:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:59.987 04:17:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
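Note: stripped of the nvmf/common.sh helper indirection, the network plumbing traced above is (commands copied from the trace; the target lives in its own namespace, the initiator stays on the host):

  ip netns add cvl_0_0_ns_spdk                               # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target-side port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (host side)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # host -> namespace sanity check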
00:25:59.987 04:17:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:59.987 04:17:14 -- common/autotest_common.sh@10 -- # set +x 00:25:59.987 [2024-04-19 04:17:14.453268] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 00:25:59.987 [2024-04-19 04:17:14.453321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.987 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.246 [2024-04-19 04:17:14.540841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.246 [2024-04-19 04:17:14.629330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.246 [2024-04-19 04:17:14.629378] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.246 [2024-04-19 04:17:14.629389] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.246 [2024-04-19 04:17:14.629402] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.246 [2024-04-19 04:17:14.629410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.246 [2024-04-19 04:17:14.629431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.182 04:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:01.182 04:17:15 -- common/autotest_common.sh@850 -- # return 0 00:26:01.182 04:17:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:01.182 04:17:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 04:17:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.182 04:17:15 -- target/dif.sh@139 -- # create_transport 00:26:01.182 04:17:15 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:01.182 04:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 [2024-04-19 04:17:15.430047] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.182 04:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.182 04:17:15 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:01.182 04:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:01.182 04:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 ************************************ 00:26:01.182 START TEST fio_dif_1_default 00:26:01.182 ************************************ 00:26:01.182 04:17:15 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:26:01.182 04:17:15 -- target/dif.sh@86 -- # create_subsystems 0 00:26:01.182 04:17:15 -- target/dif.sh@28 -- # local sub 00:26:01.182 04:17:15 -- target/dif.sh@30 -- # for sub in "$@" 00:26:01.182 04:17:15 -- target/dif.sh@31 -- # create_subsystem 0 00:26:01.182 04:17:15 -- target/dif.sh@18 -- # local sub_id=0 00:26:01.182 04:17:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:01.182 04:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 
bdev_null0 00:26:01.182 04:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.182 04:17:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:01.182 04:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 04:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.182 04:17:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:01.182 04:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 04:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.182 04:17:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.182 04:17:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.182 04:17:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.182 [2024-04-19 04:17:15.602638] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.182 04:17:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.182 04:17:15 -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:01.182 04:17:15 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:01.182 04:17:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:01.182 04:17:15 -- nvmf/common.sh@521 -- # config=() 00:26:01.182 04:17:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.182 04:17:15 -- nvmf/common.sh@521 -- # local subsystem config 00:26:01.182 04:17:15 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.182 04:17:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:01.182 04:17:15 -- target/dif.sh@82 -- # gen_fio_conf 00:26:01.182 04:17:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:01.182 { 00:26:01.182 "params": { 00:26:01.182 "name": "Nvme$subsystem", 00:26:01.182 "trtype": "$TEST_TRANSPORT", 00:26:01.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:01.182 "adrfam": "ipv4", 00:26:01.182 "trsvcid": "$NVMF_PORT", 00:26:01.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:01.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:01.182 "hdgst": ${hdgst:-false}, 00:26:01.182 "ddgst": ${ddgst:-false} 00:26:01.182 }, 00:26:01.182 "method": "bdev_nvme_attach_controller" 00:26:01.182 } 00:26:01.182 EOF 00:26:01.182 )") 00:26:01.182 04:17:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:01.182 04:17:15 -- target/dif.sh@54 -- # local file 00:26:01.182 04:17:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:01.182 04:17:15 -- target/dif.sh@56 -- # cat 00:26:01.182 04:17:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:01.182 04:17:15 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.182 04:17:15 -- common/autotest_common.sh@1327 -- # shift 00:26:01.182 04:17:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:01.182 04:17:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.182 04:17:15 -- nvmf/common.sh@543 -- # cat 00:26:01.182 04:17:15 -- target/dif.sh@72 -- # (( file = 1 
)) 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.182 04:17:15 -- target/dif.sh@72 -- # (( file <= files )) 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:01.182 04:17:15 -- nvmf/common.sh@545 -- # jq . 00:26:01.182 04:17:15 -- nvmf/common.sh@546 -- # IFS=, 00:26:01.182 04:17:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:01.182 "params": { 00:26:01.182 "name": "Nvme0", 00:26:01.182 "trtype": "tcp", 00:26:01.182 "traddr": "10.0.0.2", 00:26:01.182 "adrfam": "ipv4", 00:26:01.182 "trsvcid": "4420", 00:26:01.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:01.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:01.182 "hdgst": false, 00:26:01.182 "ddgst": false 00:26:01.182 }, 00:26:01.182 "method": "bdev_nvme_attach_controller" 00:26:01.182 }' 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:01.182 04:17:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:01.182 04:17:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:01.182 04:17:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:01.182 04:17:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:01.182 04:17:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:01.182 04:17:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:01.768 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:01.768 fio-3.35 00:26:01.768 Starting 1 thread 00:26:01.768 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.974 00:26:13.974 filename0: (groupid=0, jobs=1): err= 0: pid=3982545: Fri Apr 19 04:17:26 2024 00:26:13.974 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10019msec) 00:26:13.974 slat (nsec): min=8921, max=44924, avg=9463.29, stdev=1673.03 00:26:13.974 clat (usec): min=40804, max=45320, avg=41371.94, stdev=541.32 00:26:13.974 lat (usec): min=40813, max=45345, avg=41381.41, stdev=541.45 00:26:13.974 clat percentiles (usec): 00:26:13.974 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:13.974 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:13.974 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:13.974 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:26:13.974 | 99.99th=[45351] 00:26:13.974 bw ( KiB/s): min= 384, max= 416, per=99.62%, avg=385.60, stdev= 7.16, samples=20 00:26:13.974 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:26:13.974 lat (msec) : 50=100.00% 00:26:13.974 cpu : usr=94.79%, sys=4.90%, ctx=15, majf=0, minf=176 00:26:13.974 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.974 issued rwts: total=968,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:13.974 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.974 00:26:13.974 Run status group 0 (all jobs): 00:26:13.974 READ: bw=386KiB/s (396kB/s), 386KiB/s-386KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10019-10019msec 00:26:13.974 04:17:26 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:13.974 04:17:26 -- target/dif.sh@43 -- # local sub 00:26:13.974 04:17:26 -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.974 04:17:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.974 04:17:26 -- target/dif.sh@36 -- # local sub_id=0 00:26:13.974 04:17:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.974 04:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.974 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 04:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.974 04:17:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.974 04:17:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.974 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 04:17:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.974 00:26:13.974 real 0m11.404s 00:26:13.974 user 0m21.069s 00:26:13.974 sys 0m0.863s 00:26:13.974 04:17:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:13.974 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 ************************************ 00:26:13.974 END TEST fio_dif_1_default 00:26:13.974 ************************************ 00:26:13.974 04:17:27 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:13.974 04:17:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:13.974 04:17:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.974 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 ************************************ 00:26:13.974 START TEST fio_dif_1_multi_subsystems 00:26:13.974 ************************************ 00:26:13.974 04:17:27 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:26:13.974 04:17:27 -- target/dif.sh@92 -- # local files=1 00:26:13.974 04:17:27 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:13.974 04:17:27 -- target/dif.sh@28 -- # local sub 00:26:13.974 04:17:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.974 04:17:27 -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.974 04:17:27 -- target/dif.sh@18 -- # local sub_id=0 00:26:13.974 04:17:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:13.974 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.974 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 bdev_null0 00:26:13.974 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.974 04:17:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.974 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.974 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.974 04:17:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.974 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.974 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.974 04:17:27 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:26:13.974 04:17:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.975 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.975 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.975 [2024-04-19 04:17:27.160954] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.975 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.975 04:17:27 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.975 04:17:27 -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.975 04:17:27 -- target/dif.sh@18 -- # local sub_id=1 00:26:13.975 04:17:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:13.975 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.975 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.975 bdev_null1 00:26:13.975 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.975 04:17:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.975 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.975 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.975 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.975 04:17:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.975 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.975 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.975 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.975 04:17:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.975 04:17:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.975 04:17:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.975 04:17:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.975 04:17:27 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:13.975 04:17:27 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:13.975 04:17:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:13.975 04:17:27 -- nvmf/common.sh@521 -- # config=() 00:26:13.975 04:17:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.975 04:17:27 -- nvmf/common.sh@521 -- # local subsystem config 00:26:13.975 04:17:27 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.975 04:17:27 -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.975 04:17:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:13.975 04:17:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:13.975 04:17:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:13.975 { 00:26:13.975 "params": { 00:26:13.975 "name": "Nvme$subsystem", 00:26:13.975 "trtype": "$TEST_TRANSPORT", 00:26:13.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.975 "adrfam": "ipv4", 00:26:13.975 "trsvcid": "$NVMF_PORT", 00:26:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.975 "hdgst": ${hdgst:-false}, 00:26:13.975 "ddgst": ${ddgst:-false} 00:26:13.975 }, 00:26:13.975 "method": "bdev_nvme_attach_controller" 
00:26:13.975 } 00:26:13.975 EOF 00:26:13.975 )") 00:26:13.975 04:17:27 -- target/dif.sh@54 -- # local file 00:26:13.975 04:17:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.975 04:17:27 -- target/dif.sh@56 -- # cat 00:26:13.975 04:17:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:13.975 04:17:27 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.975 04:17:27 -- common/autotest_common.sh@1327 -- # shift 00:26:13.975 04:17:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:13.975 04:17:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.975 04:17:27 -- nvmf/common.sh@543 -- # cat 00:26:13.975 04:17:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.975 04:17:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.975 04:17:27 -- target/dif.sh@73 -- # cat 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:13.975 04:17:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:13.975 04:17:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:13.975 { 00:26:13.975 "params": { 00:26:13.975 "name": "Nvme$subsystem", 00:26:13.975 "trtype": "$TEST_TRANSPORT", 00:26:13.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.975 "adrfam": "ipv4", 00:26:13.975 "trsvcid": "$NVMF_PORT", 00:26:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.975 "hdgst": ${hdgst:-false}, 00:26:13.975 "ddgst": ${ddgst:-false} 00:26:13.975 }, 00:26:13.975 "method": "bdev_nvme_attach_controller" 00:26:13.975 } 00:26:13.975 EOF 00:26:13.975 )") 00:26:13.975 04:17:27 -- target/dif.sh@72 -- # (( file++ )) 00:26:13.975 04:17:27 -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.975 04:17:27 -- nvmf/common.sh@543 -- # cat 00:26:13.975 04:17:27 -- nvmf/common.sh@545 -- # jq . 
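Note: every fio DIF test provisions its targets with the same four-RPC sequence that the xtrace above walks through; for the multi-subsystem case it runs twice, with 0 swapped for 1 the second time (rpc_cmd is the test suite's thin wrapper around scripts/rpc.py):

  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420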
00:26:13.975 04:17:27 -- nvmf/common.sh@546 -- # IFS=, 00:26:13.975 04:17:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:13.975 "params": { 00:26:13.975 "name": "Nvme0", 00:26:13.975 "trtype": "tcp", 00:26:13.975 "traddr": "10.0.0.2", 00:26:13.975 "adrfam": "ipv4", 00:26:13.975 "trsvcid": "4420", 00:26:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.975 "hdgst": false, 00:26:13.975 "ddgst": false 00:26:13.975 }, 00:26:13.975 "method": "bdev_nvme_attach_controller" 00:26:13.975 },{ 00:26:13.975 "params": { 00:26:13.975 "name": "Nvme1", 00:26:13.975 "trtype": "tcp", 00:26:13.975 "traddr": "10.0.0.2", 00:26:13.975 "adrfam": "ipv4", 00:26:13.975 "trsvcid": "4420", 00:26:13.975 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.975 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.975 "hdgst": false, 00:26:13.975 "ddgst": false 00:26:13.975 }, 00:26:13.975 "method": "bdev_nvme_attach_controller" 00:26:13.975 }' 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:13.975 04:17:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:13.975 04:17:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:13.975 04:17:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:13.975 04:17:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:13.975 04:17:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:13.975 04:17:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.975 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.975 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:13.975 fio-3.35 00:26:13.975 Starting 2 threads 00:26:13.975 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.921 00:26:23.921 filename0: (groupid=0, jobs=1): err= 0: pid=3984795: Fri Apr 19 04:17:38 2024 00:26:23.921 read: IOPS=189, BW=759KiB/s (778kB/s)(7600KiB/10009msec) 00:26:23.921 slat (nsec): min=9074, max=24900, avg=10203.99, stdev=2035.06 00:26:23.921 clat (usec): min=703, max=42262, avg=21042.76, stdev=20255.24 00:26:23.921 lat (usec): min=712, max=42272, avg=21052.96, stdev=20254.58 00:26:23.921 clat percentiles (usec): 00:26:23.921 | 1.00th=[ 709], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 734], 00:26:23.921 | 30.00th=[ 742], 40.00th=[ 750], 50.00th=[41157], 60.00th=[41157], 00:26:23.921 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.921 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:23.921 | 99.99th=[42206] 00:26:23.921 bw ( KiB/s): min= 704, max= 768, per=49.97%, avg=758.40, stdev=18.28, samples=20 00:26:23.921 iops : min= 176, max= 192, avg=189.60, stdev= 4.57, samples=20 00:26:23.921 lat (usec) : 750=38.53%, 1000=10.58% 00:26:23.921 lat (msec) : 2=0.79%, 50=50.11% 00:26:23.921 cpu : usr=97.42%, sys=2.30%, ctx=9, majf=0, minf=34 00:26:23.921 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.921 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.921 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.921 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.921 filename1: (groupid=0, jobs=1): err= 0: pid=3984796: Fri Apr 19 04:17:38 2024 00:26:23.921 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10007msec) 00:26:23.921 slat (nsec): min=9036, max=33371, avg=10226.72, stdev=2140.53 00:26:23.921 clat (usec): min=662, max=42241, avg=21081.21, stdev=20245.82 00:26:23.921 lat (usec): min=672, max=42251, avg=21091.44, stdev=20245.20 00:26:23.921 clat percentiles (usec): 00:26:23.921 | 1.00th=[ 709], 5.00th=[ 717], 10.00th=[ 725], 20.00th=[ 734], 00:26:23.921 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[41157], 60.00th=[41157], 00:26:23.921 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:23.921 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:23.921 | 99.99th=[42206] 00:26:23.921 bw ( KiB/s): min= 672, max= 768, per=49.83%, avg=756.80, stdev=28.00, samples=20 00:26:23.921 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:26:23.921 lat (usec) : 750=33.60%, 1000=16.19% 00:26:23.921 lat (msec) : 50=50.21% 00:26:23.921 cpu : usr=97.21%, sys=2.50%, ctx=12, majf=0, minf=160 00:26:23.921 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.921 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.922 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:23.922 00:26:23.922 Run status group 0 (all jobs): 00:26:23.922 READ: bw=1517KiB/s (1553kB/s), 758KiB/s-759KiB/s (776kB/s-778kB/s), io=14.8MiB (15.5MB), run=10007-10009msec 00:26:24.178 04:17:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:24.178 04:17:38 -- target/dif.sh@43 -- # local sub 00:26:24.178 04:17:38 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.178 04:17:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:24.178 04:17:38 -- target/dif.sh@36 -- # local sub_id=0 00:26:24.179 04:17:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:24.179 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.179 04:17:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:24.179 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.179 04:17:38 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.179 04:17:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:24.179 04:17:38 -- target/dif.sh@36 -- # local sub_id=1 00:26:24.179 04:17:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.179 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.179 04:17:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:24.179 04:17:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.179 00:26:24.179 real 0m11.472s 00:26:24.179 user 0m31.158s 00:26:24.179 sys 0m0.834s 00:26:24.179 04:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.179 ************************************ 00:26:24.179 END TEST fio_dif_1_multi_subsystems 00:26:24.179 ************************************ 00:26:24.179 04:17:38 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:24.179 04:17:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:24.179 04:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.179 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.436 ************************************ 00:26:24.436 START TEST fio_dif_rand_params 00:26:24.436 ************************************ 00:26:24.436 04:17:38 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:26:24.436 04:17:38 -- target/dif.sh@100 -- # local NULL_DIF 00:26:24.436 04:17:38 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:24.436 04:17:38 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:24.436 04:17:38 -- target/dif.sh@103 -- # bs=128k 00:26:24.436 04:17:38 -- target/dif.sh@103 -- # numjobs=3 00:26:24.436 04:17:38 -- target/dif.sh@103 -- # iodepth=3 00:26:24.436 04:17:38 -- target/dif.sh@103 -- # runtime=5 00:26:24.436 04:17:38 -- target/dif.sh@105 -- # create_subsystems 0 00:26:24.436 04:17:38 -- target/dif.sh@28 -- # local sub 00:26:24.436 04:17:38 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.436 04:17:38 -- target/dif.sh@31 -- # create_subsystem 0 00:26:24.436 04:17:38 -- target/dif.sh@18 -- # local sub_id=0 00:26:24.436 04:17:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:24.436 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.436 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.436 bdev_null0 00:26:24.436 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.436 04:17:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.436 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.436 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.436 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.436 04:17:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.436 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.436 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.436 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.436 04:17:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.436 04:17:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.436 04:17:38 -- common/autotest_common.sh@10 -- # set +x 00:26:24.436 [2024-04-19 04:17:38.795792] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.436 04:17:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.436 04:17:38 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:24.436 04:17:38 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:24.436 
04:17:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:24.436 04:17:38 -- nvmf/common.sh@521 -- # config=() 00:26:24.436 04:17:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.436 04:17:38 -- nvmf/common.sh@521 -- # local subsystem config 00:26:24.436 04:17:38 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.436 04:17:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:24.436 04:17:38 -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.436 04:17:38 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:24.436 04:17:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:24.436 { 00:26:24.436 "params": { 00:26:24.436 "name": "Nvme$subsystem", 00:26:24.436 "trtype": "$TEST_TRANSPORT", 00:26:24.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.436 "adrfam": "ipv4", 00:26:24.436 "trsvcid": "$NVMF_PORT", 00:26:24.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.436 "hdgst": ${hdgst:-false}, 00:26:24.436 "ddgst": ${ddgst:-false} 00:26:24.436 }, 00:26:24.436 "method": "bdev_nvme_attach_controller" 00:26:24.436 } 00:26:24.436 EOF 00:26:24.436 )") 00:26:24.436 04:17:38 -- target/dif.sh@54 -- # local file 00:26:24.436 04:17:38 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.436 04:17:38 -- target/dif.sh@56 -- # cat 00:26:24.436 04:17:38 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:24.436 04:17:38 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.436 04:17:38 -- common/autotest_common.sh@1327 -- # shift 00:26:24.436 04:17:38 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:24.436 04:17:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.436 04:17:38 -- nvmf/common.sh@543 -- # cat 00:26:24.436 04:17:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.436 04:17:38 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:24.436 04:17:38 -- nvmf/common.sh@545 -- # jq . 
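The nvmf/common.sh lines above assemble the fio plugin's JSON configuration entirely in memory: each pass of the loop appends one bdev_nvme_attach_controller fragment to config[], and only the comma join (IFS=, plus printf) and the final jq . are echoed. The enclosing document skeleton in this sketch is therefore an assumption about gen_nvmf_target_json's shape, not copied from the trace:

gen_target_json_sketch() {
    # join the per-controller fragments with commas and wrap them into one
    # bdev-subsystem config document, pretty-printed by jq
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}
# the config never touches disk: process substitution is presumably what turns
# it and the jobfile into the /dev/fd/62 and /dev/fd/61 arguments seen in the
# fio_bdev invocation above
fio_bdev --ioengine=spdk_bdev --spdk_json_conf <(gen_target_json_sketch) <(gen_fio_conf)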
00:26:24.436 04:17:38 -- nvmf/common.sh@546 -- # IFS=, 00:26:24.436 04:17:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:24.436 "params": { 00:26:24.436 "name": "Nvme0", 00:26:24.436 "trtype": "tcp", 00:26:24.436 "traddr": "10.0.0.2", 00:26:24.436 "adrfam": "ipv4", 00:26:24.436 "trsvcid": "4420", 00:26:24.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.436 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.436 "hdgst": false, 00:26:24.436 "ddgst": false 00:26:24.436 }, 00:26:24.436 "method": "bdev_nvme_attach_controller" 00:26:24.436 }' 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:24.436 04:17:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:24.436 04:17:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:24.436 04:17:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:24.436 04:17:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:24.436 04:17:38 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:24.436 04:17:38 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.999 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:24.999 ... 00:26:24.999 fio-3.35 00:26:24.999 Starting 3 threads 00:26:24.999 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.548 00:26:31.548 filename0: (groupid=0, jobs=1): err= 0: pid=3986789: Fri Apr 19 04:17:44 2024 00:26:31.548 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(128MiB/5006msec) 00:26:31.548 slat (nsec): min=9385, max=48416, avg=17019.12, stdev=3672.79 00:26:31.548 clat (usec): min=4561, max=90489, avg=14696.96, stdev=11360.24 00:26:31.548 lat (usec): min=4572, max=90507, avg=14713.97, stdev=11360.38 00:26:31.548 clat percentiles (usec): 00:26:31.548 | 1.00th=[ 5538], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9372], 00:26:31.548 | 30.00th=[10159], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:26:31.548 | 70.00th=[13435], 80.00th=[14222], 90.00th=[16057], 95.00th=[51643], 00:26:31.548 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[90702], 00:26:31.548 | 99.99th=[90702] 00:26:31.548 bw ( KiB/s): min=18688, max=35840, per=33.26%, avg=26060.80, stdev=5839.94, samples=10 00:26:31.548 iops : min= 146, max= 280, avg=203.60, stdev=45.62, samples=10 00:26:31.548 lat (msec) : 10=28.82%, 20=63.63%, 50=0.69%, 100=6.86% 00:26:31.548 cpu : usr=95.84%, sys=3.78%, ctx=9, majf=0, minf=89 00:26:31.548 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.548 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.548 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.548 filename0: (groupid=0, jobs=1): err= 0: pid=3986790: Fri Apr 19 04:17:44 2024 00:26:31.548 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(125MiB/5044msec) 00:26:31.548 slat (nsec): min=5501, max=79820, avg=9648.78, stdev=3466.90 00:26:31.548 clat (usec): 
min=4876, max=59712, avg=15045.82, stdev=11811.95 00:26:31.548 lat (usec): min=4882, max=59724, avg=15055.47, stdev=11812.42 00:26:31.548 clat percentiles (usec): 00:26:31.548 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 7832], 20.00th=[ 9241], 00:26:31.548 | 30.00th=[10028], 40.00th=[11600], 50.00th=[12518], 60.00th=[13173], 00:26:31.548 | 70.00th=[13829], 80.00th=[14746], 90.00th=[17433], 95.00th=[52691], 00:26:31.548 | 99.00th=[57410], 99.50th=[58459], 99.90th=[58983], 99.95th=[59507], 00:26:31.548 | 99.99th=[59507] 00:26:31.548 bw ( KiB/s): min=20224, max=31488, per=32.67%, avg=25604.60, stdev=3596.61, samples=10 00:26:31.548 iops : min= 158, max= 246, avg=200.00, stdev=28.13, samples=10 00:26:31.548 lat (msec) : 10=29.74%, 20=62.28%, 50=0.40%, 100=7.58% 00:26:31.548 cpu : usr=96.91%, sys=2.76%, ctx=19, majf=0, minf=138 00:26:31.548 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.548 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.548 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.548 filename0: (groupid=0, jobs=1): err= 0: pid=3986791: Fri Apr 19 04:17:44 2024 00:26:31.548 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(133MiB/5004msec) 00:26:31.548 slat (nsec): min=5528, max=29800, avg=10316.25, stdev=3169.98 00:26:31.548 clat (usec): min=5426, max=94215, avg=14067.00, stdev=9989.60 00:26:31.548 lat (usec): min=5433, max=94227, avg=14077.31, stdev=9989.87 00:26:31.548 clat percentiles (usec): 00:26:31.548 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 9241], 00:26:31.548 | 30.00th=[10028], 40.00th=[11076], 50.00th=[12256], 60.00th=[13435], 00:26:31.548 | 70.00th=[14222], 80.00th=[15401], 90.00th=[17957], 95.00th=[20841], 00:26:31.548 | 99.00th=[56361], 99.50th=[57410], 99.90th=[60031], 99.95th=[93848], 00:26:31.548 | 99.99th=[93848] 00:26:31.549 bw ( KiB/s): min=22272, max=33280, per=34.75%, avg=27232.80, stdev=3465.53, samples=10 00:26:31.549 iops : min= 174, max= 260, avg=212.70, stdev=27.06, samples=10 00:26:31.549 lat (msec) : 10=30.30%, 20=63.98%, 50=1.22%, 100=4.50% 00:26:31.549 cpu : usr=94.76%, sys=3.82%, ctx=361, majf=0, minf=114 00:26:31.549 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.549 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.549 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:31.549 00:26:31.549 Run status group 0 (all jobs): 00:26:31.549 READ: bw=76.5MiB/s (80.2MB/s), 24.8MiB/s-26.6MiB/s (26.0MB/s-27.9MB/s), io=386MiB (405MB), run=5004-5044msec 00:26:31.549 04:17:45 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:31.549 04:17:45 -- target/dif.sh@43 -- # local sub 00:26:31.549 04:17:45 -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.549 04:17:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:31.549 04:17:45 -- target/dif.sh@36 -- # local sub_id=0 00:26:31.549 04:17:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
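dif.sh tears the target down in the reverse of setup: the subsystem above is deleted first, which detaches its namespace, and only then is the backing null bdev removed (the bdev_null_delete call follows immediately below). A standalone sketch of that ordering, under the same rpc.py assumption as before:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0  # detach ns first
./scripts/rpc.py bdev_null_delete bdev_null0                       # then drop the bdev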
00:26:31.549 04:17:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # bs=4k 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # numjobs=8 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # iodepth=16 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # runtime= 00:26:31.549 04:17:45 -- target/dif.sh@109 -- # files=2 00:26:31.549 04:17:45 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:31.549 04:17:45 -- target/dif.sh@28 -- # local sub 00:26:31.549 04:17:45 -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.549 04:17:45 -- target/dif.sh@31 -- # create_subsystem 0 00:26:31.549 04:17:45 -- target/dif.sh@18 -- # local sub_id=0 00:26:31.549 04:17:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 bdev_null0 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 [2024-04-19 04:17:45.059791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.549 04:17:45 -- target/dif.sh@31 -- # create_subsystem 1 00:26:31.549 04:17:45 -- target/dif.sh@18 -- # local sub_id=1 00:26:31.549 04:17:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 bdev_null1 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:31.549 04:17:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.549 04:17:45 -- target/dif.sh@31 -- # create_subsystem 2 00:26:31.549 04:17:45 -- target/dif.sh@18 -- # local sub_id=2 00:26:31.549 04:17:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 bdev_null2 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:31.549 04:17:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.549 04:17:45 -- common/autotest_common.sh@10 -- # set +x 00:26:31.549 04:17:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.549 04:17:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:31.549 04:17:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:31.549 04:17:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:31.549 04:17:45 -- nvmf/common.sh@521 -- # config=() 00:26:31.549 04:17:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.549 04:17:45 -- nvmf/common.sh@521 -- # local subsystem config 00:26:31.549 04:17:45 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.549 04:17:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:31.549 04:17:45 -- target/dif.sh@82 -- # gen_fio_conf 00:26:31.549 04:17:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:31.549 { 00:26:31.549 "params": { 00:26:31.549 "name": "Nvme$subsystem", 00:26:31.549 "trtype": "$TEST_TRANSPORT", 00:26:31.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.549 "adrfam": "ipv4", 00:26:31.549 "trsvcid": "$NVMF_PORT", 00:26:31.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.549 "hdgst": ${hdgst:-false}, 00:26:31.549 "ddgst": ${ddgst:-false} 00:26:31.549 }, 00:26:31.549 "method": "bdev_nvme_attach_controller" 00:26:31.549 } 00:26:31.549 EOF 00:26:31.549 )") 
00:26:31.549 04:17:45 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:31.549 04:17:45 -- target/dif.sh@54 -- # local file 00:26:31.549 04:17:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:31.549 04:17:45 -- target/dif.sh@56 -- # cat 00:26:31.549 04:17:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:31.549 04:17:45 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.549 04:17:45 -- common/autotest_common.sh@1327 -- # shift 00:26:31.549 04:17:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:31.549 04:17:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.549 04:17:45 -- nvmf/common.sh@543 -- # cat 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:31.549 04:17:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.549 04:17:45 -- target/dif.sh@73 -- # cat 00:26:31.549 04:17:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:31.549 04:17:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:31.549 04:17:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:31.549 04:17:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:31.549 { 00:26:31.549 "params": { 00:26:31.549 "name": "Nvme$subsystem", 00:26:31.549 "trtype": "$TEST_TRANSPORT", 00:26:31.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.549 "adrfam": "ipv4", 00:26:31.549 "trsvcid": "$NVMF_PORT", 00:26:31.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.549 "hdgst": ${hdgst:-false}, 00:26:31.549 "ddgst": ${ddgst:-false} 00:26:31.549 }, 00:26:31.549 "method": "bdev_nvme_attach_controller" 00:26:31.549 } 00:26:31.549 EOF 00:26:31.549 )") 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file++ )) 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.549 04:17:45 -- target/dif.sh@73 -- # cat 00:26:31.549 04:17:45 -- nvmf/common.sh@543 -- # cat 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file++ )) 00:26:31.549 04:17:45 -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.549 04:17:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:31.549 04:17:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:31.549 { 00:26:31.549 "params": { 00:26:31.549 "name": "Nvme$subsystem", 00:26:31.549 "trtype": "$TEST_TRANSPORT", 00:26:31.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.549 "adrfam": "ipv4", 00:26:31.550 "trsvcid": "$NVMF_PORT", 00:26:31.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.550 "hdgst": ${hdgst:-false}, 00:26:31.550 "ddgst": ${ddgst:-false} 00:26:31.550 }, 00:26:31.550 "method": "bdev_nvme_attach_controller" 00:26:31.550 } 00:26:31.550 EOF 00:26:31.550 )") 00:26:31.550 04:17:45 -- nvmf/common.sh@543 -- # cat 00:26:31.550 04:17:45 -- nvmf/common.sh@545 -- # jq . 
00:26:31.550 04:17:45 -- nvmf/common.sh@546 -- # IFS=, 00:26:31.550 04:17:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:31.550 "params": { 00:26:31.550 "name": "Nvme0", 00:26:31.550 "trtype": "tcp", 00:26:31.550 "traddr": "10.0.0.2", 00:26:31.550 "adrfam": "ipv4", 00:26:31.550 "trsvcid": "4420", 00:26:31.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:31.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:31.550 "hdgst": false, 00:26:31.550 "ddgst": false 00:26:31.550 }, 00:26:31.550 "method": "bdev_nvme_attach_controller" 00:26:31.550 },{ 00:26:31.550 "params": { 00:26:31.550 "name": "Nvme1", 00:26:31.550 "trtype": "tcp", 00:26:31.550 "traddr": "10.0.0.2", 00:26:31.550 "adrfam": "ipv4", 00:26:31.550 "trsvcid": "4420", 00:26:31.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.550 "hdgst": false, 00:26:31.550 "ddgst": false 00:26:31.550 }, 00:26:31.550 "method": "bdev_nvme_attach_controller" 00:26:31.550 },{ 00:26:31.550 "params": { 00:26:31.550 "name": "Nvme2", 00:26:31.550 "trtype": "tcp", 00:26:31.550 "traddr": "10.0.0.2", 00:26:31.550 "adrfam": "ipv4", 00:26:31.550 "trsvcid": "4420", 00:26:31.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:31.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:31.550 "hdgst": false, 00:26:31.550 "ddgst": false 00:26:31.550 }, 00:26:31.550 "method": "bdev_nvme_attach_controller" 00:26:31.550 }' 00:26:31.550 04:17:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:31.550 04:17:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:31.550 04:17:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.550 04:17:45 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.550 04:17:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:31.550 04:17:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:31.550 04:17:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:31.550 04:17:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:31.550 04:17:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:31.550 04:17:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.550 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:31.550 ... 00:26:31.550 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:31.550 ... 00:26:31.550 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:31.550 ... 
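The three job descriptions above multiply out to the 24 threads fio starts below: three [filenameN] sections times numjobs=8, with bs=4k and iodepth=16 as set at target/dif.sh@109 earlier in the trace. The generated jobfile itself is never echoed, so the following is only a sketch of its likely shape; the filename values are assumptions standing in for whatever bdev names the attached Nvme0..Nvme2 controllers expose:

gen_fio_conf_sketch() {
cat <<FIO
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=16
numjobs=8
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
FIO
}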
00:26:31.550 fio-3.35 00:26:31.550 Starting 24 threads 00:26:31.550 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.746 00:26:43.746 filename0: (groupid=0, jobs=1): err= 0: pid=3988231: Fri Apr 19 04:17:56 2024 00:26:43.746 read: IOPS=421, BW=1685KiB/s (1725kB/s)(16.5MiB/10030msec) 00:26:43.746 slat (nsec): min=6865, max=81668, avg=18730.33, stdev=13124.81 00:26:43.746 clat (usec): min=9164, max=49226, avg=37840.77, stdev=2725.25 00:26:43.746 lat (usec): min=9174, max=49239, avg=37859.50, stdev=2725.20 00:26:43.746 clat percentiles (usec): 00:26:43.747 | 1.00th=[23200], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:26:43.747 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:26:43.747 | 99.00th=[39584], 99.50th=[40633], 99.90th=[49021], 99.95th=[49021], 00:26:43.747 | 99.99th=[49021] 00:26:43.747 bw ( KiB/s): min= 1660, max= 1792, per=4.20%, avg=1682.80, stdev=47.08, samples=20 00:26:43.747 iops : min= 415, max= 448, avg=420.70, stdev=11.77, samples=20 00:26:43.747 lat (msec) : 10=0.38%, 20=0.38%, 50=99.24% 00:26:43.747 cpu : usr=98.55%, sys=0.89%, ctx=100, majf=0, minf=50 00:26:43.747 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988232: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10026msec) 00:26:43.747 slat (usec): min=10, max=131, avg=50.06, stdev=17.87 00:26:43.747 clat (usec): min=32413, max=67213, avg=37962.56, stdev=1876.10 00:26:43.747 lat (usec): min=32444, max=67237, avg=38012.62, stdev=1874.19 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.747 | 99.00th=[39584], 99.50th=[40109], 99.90th=[67634], 99.95th=[67634], 00:26:43.747 | 99.99th=[67634] 00:26:43.747 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.45, stdev=41.57, samples=20 00:26:43.747 iops : min= 384, max= 448, avg=415.85, stdev=10.39, samples=20 00:26:43.747 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.747 cpu : usr=97.43%, sys=1.55%, ctx=61, majf=0, minf=34 00:26:43.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988233: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=421, BW=1685KiB/s (1725kB/s)(16.5MiB/10006msec) 00:26:43.747 slat (usec): min=5, max=142, avg=24.84, stdev=19.92 00:26:43.747 clat (usec): min=17002, max=81883, avg=37900.46, stdev=3295.94 00:26:43.747 lat (usec): min=17013, max=81901, avg=37925.30, stdev=3295.53 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[26084], 5.00th=[31589], 
10.00th=[37487], 20.00th=[38011], 00:26:43.747 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:26:43.747 | 99.00th=[45876], 99.50th=[57410], 99.90th=[60556], 99.95th=[60556], 00:26:43.747 | 99.99th=[82314] 00:26:43.747 bw ( KiB/s): min= 1552, max= 1776, per=4.19%, avg=1679.37, stdev=44.89, samples=19 00:26:43.747 iops : min= 388, max= 444, avg=419.84, stdev=11.22, samples=19 00:26:43.747 lat (msec) : 20=0.14%, 50=99.10%, 100=0.76% 00:26:43.747 cpu : usr=97.30%, sys=1.63%, ctx=291, majf=0, minf=39 00:26:43.747 IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=80.9%, 16=18.0%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=89.4%, 8=10.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988234: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10026msec) 00:26:43.747 slat (usec): min=9, max=173, avg=49.36, stdev=17.84 00:26:43.747 clat (usec): min=30262, max=75144, avg=37952.12, stdev=1951.81 00:26:43.747 lat (usec): min=30273, max=75165, avg=38001.49, stdev=1950.47 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[37487], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.747 | 99.00th=[39060], 99.50th=[40109], 99.90th=[67634], 99.95th=[67634], 00:26:43.747 | 99.99th=[74974] 00:26:43.747 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.45, stdev=41.57, samples=20 00:26:43.747 iops : min= 384, max= 448, avg=415.85, stdev=10.39, samples=20 00:26:43.747 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.747 cpu : usr=98.73%, sys=0.76%, ctx=55, majf=0, minf=31 00:26:43.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988235: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=416, BW=1667KiB/s (1707kB/s)(16.3MiB/10018msec) 00:26:43.747 slat (usec): min=8, max=106, avg=43.51, stdev=23.62 00:26:43.747 clat (usec): min=24263, max=72415, avg=38040.78, stdev=1673.13 00:26:43.747 lat (usec): min=24275, max=72437, avg=38084.28, stdev=1669.47 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.747 | 99.00th=[40109], 99.50th=[40633], 99.90th=[58459], 99.95th=[58459], 00:26:43.747 | 99.99th=[72877] 00:26:43.747 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1668.95, stdev=47.52, samples=20 00:26:43.747 iops : min= 384, max= 448, avg=417.20, stdev=11.80, samples=20 00:26:43.747 lat (msec) : 50=99.52%, 100=0.48% 00:26:43.747 cpu : usr=98.68%, sys=0.91%, ctx=25, majf=0, minf=35 
00:26:43.747 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988236: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10005msec) 00:26:43.747 slat (nsec): min=9599, max=94427, avg=41406.26, stdev=19971.40 00:26:43.747 clat (usec): min=30706, max=47721, avg=38007.07, stdev=849.53 00:26:43.747 lat (usec): min=30724, max=47748, avg=38048.47, stdev=844.24 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.747 | 99.00th=[39584], 99.50th=[40109], 99.90th=[47449], 99.95th=[47449], 00:26:43.747 | 99.99th=[47973] 00:26:43.747 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1670.53, stdev=51.28, samples=19 00:26:43.747 iops : min= 384, max= 448, avg=417.63, stdev=12.82, samples=19 00:26:43.747 lat (msec) : 50=100.00% 00:26:43.747 cpu : usr=98.91%, sys=0.71%, ctx=19, majf=0, minf=26 00:26:43.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988237: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10006msec) 00:26:43.747 slat (nsec): min=7712, max=83046, avg=31934.13, stdev=16393.68 00:26:43.747 clat (usec): min=24304, max=60523, avg=38042.86, stdev=1715.88 00:26:43.747 lat (usec): min=24322, max=60537, avg=38074.79, stdev=1713.57 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[36439], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.747 | 99.00th=[39584], 99.50th=[40633], 99.90th=[60556], 99.95th=[60556], 00:26:43.747 | 99.99th=[60556] 00:26:43.747 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.37, stdev=42.69, samples=19 00:26:43.747 iops : min= 384, max= 448, avg=415.84, stdev=10.67, samples=19 00:26:43.747 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.747 cpu : usr=98.51%, sys=0.89%, ctx=100, majf=0, minf=29 00:26:43.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.747 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.747 filename0: (groupid=0, jobs=1): err= 0: pid=3988238: Fri Apr 19 04:17:56 2024 00:26:43.747 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10010msec) 00:26:43.747 slat (usec): min=5, max=103, 
avg=51.04, stdev=22.72 00:26:43.747 clat (usec): min=31159, max=51626, avg=37859.59, stdev=1043.73 00:26:43.747 lat (usec): min=31177, max=51649, avg=37910.63, stdev=1043.19 00:26:43.747 clat percentiles (usec): 00:26:43.747 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.747 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:26:43.747 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.747 | 99.00th=[39584], 99.50th=[40109], 99.90th=[51643], 99.95th=[51643], 00:26:43.747 | 99.99th=[51643] 00:26:43.747 bw ( KiB/s): min= 1539, max= 1792, per=4.17%, avg=1669.55, stdev=48.10, samples=20 00:26:43.747 iops : min= 384, max= 448, avg=417.35, stdev=12.13, samples=20 00:26:43.747 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.747 cpu : usr=98.74%, sys=0.87%, ctx=14, majf=0, minf=31 00:26:43.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988239: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=416, BW=1668KiB/s (1708kB/s)(16.3MiB/10017msec) 00:26:43.748 slat (usec): min=8, max=114, avg=47.74, stdev=17.51 00:26:43.748 clat (usec): min=24320, max=72708, avg=37974.47, stdev=1586.09 00:26:43.748 lat (usec): min=24332, max=72731, avg=38022.21, stdev=1583.61 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.748 | 99.00th=[40109], 99.50th=[40633], 99.90th=[58983], 99.95th=[58983], 00:26:43.748 | 99.99th=[72877] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.79, stdev=42.68, samples=19 00:26:43.748 iops : min= 384, max= 448, avg=415.95, stdev=10.67, samples=19 00:26:43.748 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.748 cpu : usr=98.24%, sys=1.09%, ctx=49, majf=0, minf=32 00:26:43.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988240: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=416, BW=1664KiB/s (1704kB/s)(16.3MiB/10033msec) 00:26:43.748 slat (usec): min=9, max=100, avg=48.00, stdev=18.85 00:26:43.748 clat (usec): min=29582, max=67255, avg=38079.11, stdev=1966.72 00:26:43.748 lat (usec): min=29597, max=67275, avg=38127.12, stdev=1964.36 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[37487], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.748 | 99.00th=[40109], 99.50th=[45876], 99.90th=[67634], 99.95th=[67634], 00:26:43.748 | 99.99th=[67634] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1776, 
per=4.15%, avg=1663.45, stdev=39.23, samples=20 00:26:43.748 iops : min= 384, max= 444, avg=415.85, stdev= 9.81, samples=20 00:26:43.748 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.748 cpu : usr=98.91%, sys=0.66%, ctx=29, majf=0, minf=56 00:26:43.748 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988241: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10006msec) 00:26:43.748 slat (nsec): min=9502, max=91577, avg=40318.48, stdev=20414.23 00:26:43.748 clat (usec): min=30729, max=49701, avg=38029.56, stdev=922.98 00:26:43.748 lat (usec): min=30747, max=49728, avg=38069.88, stdev=918.41 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[37487], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.748 | 99.00th=[39584], 99.50th=[40109], 99.90th=[49546], 99.95th=[49546], 00:26:43.748 | 99.99th=[49546] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1670.53, stdev=51.28, samples=19 00:26:43.748 iops : min= 384, max= 448, avg=417.63, stdev=12.82, samples=19 00:26:43.748 lat (msec) : 50=100.00% 00:26:43.748 cpu : usr=94.88%, sys=2.62%, ctx=130, majf=0, minf=41 00:26:43.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988242: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10026msec) 00:26:43.748 slat (usec): min=8, max=101, avg=49.95, stdev=17.15 00:26:43.748 clat (usec): min=32313, max=67122, avg=37988.31, stdev=1867.81 00:26:43.748 lat (usec): min=32378, max=67145, avg=38038.26, stdev=1865.19 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.748 | 99.00th=[39060], 99.50th=[40109], 99.90th=[66847], 99.95th=[66847], 00:26:43.748 | 99.99th=[67634] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.45, stdev=41.57, samples=20 00:26:43.748 iops : min= 384, max= 448, avg=415.85, stdev=10.39, samples=20 00:26:43.748 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.748 cpu : usr=98.88%, sys=0.72%, ctx=17, majf=0, minf=38 00:26:43.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988243: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10010msec) 00:26:43.748 slat (nsec): min=5563, max=87533, avg=47206.23, stdev=16181.07 00:26:43.748 clat (usec): min=24363, max=65327, avg=37933.73, stdev=1198.46 00:26:43.748 lat (usec): min=24373, max=65342, avg=37980.93, stdev=1196.51 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.748 | 99.00th=[40109], 99.50th=[40633], 99.90th=[51643], 99.95th=[51643], 00:26:43.748 | 99.99th=[65274] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.79, stdev=42.68, samples=19 00:26:43.748 iops : min= 384, max= 448, avg=415.95, stdev=10.67, samples=19 00:26:43.748 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.748 cpu : usr=98.67%, sys=0.81%, ctx=41, majf=0, minf=29 00:26:43.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988244: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=416, BW=1665KiB/s (1705kB/s)(16.3MiB/10031msec) 00:26:43.748 slat (usec): min=5, max=121, avg=33.33, stdev=22.38 00:26:43.748 clat (usec): min=31866, max=65425, avg=38146.14, stdev=1861.43 00:26:43.748 lat (usec): min=31890, max=65441, avg=38179.47, stdev=1859.02 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.748 | 99.00th=[39584], 99.50th=[46924], 99.90th=[65274], 99.95th=[65274], 00:26:43.748 | 99.99th=[65274] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.80, stdev=41.54, samples=20 00:26:43.748 iops : min= 384, max= 448, avg=415.95, stdev=10.38, samples=20 00:26:43.748 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.748 cpu : usr=98.85%, sys=0.75%, ctx=18, majf=0, minf=30 00:26:43.748 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988245: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10006msec) 00:26:43.748 slat (nsec): min=8375, max=90055, avg=30309.14, stdev=16628.28 00:26:43.748 clat (usec): min=22390, max=76126, avg=38076.02, stdev=2023.99 00:26:43.748 lat (usec): min=22417, max=76150, avg=38106.33, stdev=2022.02 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36439], 5.00th=[37487], 10.00th=[37487], 20.00th=[38011], 00:26:43.748 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 
60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.748 | 99.00th=[40109], 99.50th=[52691], 99.90th=[60556], 99.95th=[60556], 00:26:43.748 | 99.99th=[76022] 00:26:43.748 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.37, stdev=42.69, samples=19 00:26:43.748 iops : min= 384, max= 448, avg=415.84, stdev=10.67, samples=19 00:26:43.748 lat (msec) : 50=99.47%, 100=0.53% 00:26:43.748 cpu : usr=99.01%, sys=0.60%, ctx=17, majf=0, minf=52 00:26:43.748 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:43.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.748 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.748 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.748 filename1: (groupid=0, jobs=1): err= 0: pid=3988246: Fri Apr 19 04:17:56 2024 00:26:43.748 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10006msec) 00:26:43.748 slat (nsec): min=5507, max=82553, avg=31836.06, stdev=16383.64 00:26:43.748 clat (usec): min=22519, max=76173, avg=38043.29, stdev=1983.16 00:26:43.748 lat (usec): min=22537, max=76186, avg=38075.12, stdev=1981.20 00:26:43.748 clat percentiles (usec): 00:26:43.748 | 1.00th=[36439], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.748 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.748 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.749 | 99.00th=[40109], 99.50th=[40633], 99.90th=[60556], 99.95th=[60556], 00:26:43.749 | 99.99th=[76022] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.37, stdev=42.69, samples=19 00:26:43.749 iops : min= 384, max= 448, avg=415.84, stdev=10.67, samples=19 00:26:43.749 lat (msec) : 50=99.52%, 100=0.48% 00:26:43.749 cpu : usr=98.52%, sys=0.88%, ctx=53, majf=0, minf=32 00:26:43.749 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988247: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=416, BW=1665KiB/s (1705kB/s)(16.3MiB/10031msec) 00:26:43.749 slat (usec): min=8, max=110, avg=52.54, stdev=22.19 00:26:43.749 clat (usec): min=31178, max=73285, avg=37977.23, stdev=2268.75 00:26:43.749 lat (usec): min=31196, max=73303, avg=38029.77, stdev=2265.92 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:26:43.749 | 99.00th=[39584], 99.50th=[40109], 99.90th=[72877], 99.95th=[72877], 00:26:43.749 | 99.99th=[72877] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.80, stdev=41.54, samples=20 00:26:43.749 iops : min= 384, max= 448, avg=415.95, stdev=10.38, samples=20 00:26:43.749 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.749 cpu : usr=98.84%, sys=0.76%, ctx=21, majf=0, minf=34 00:26:43.749 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988248: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=433, BW=1734KiB/s (1776kB/s)(16.9MiB/10006msec) 00:26:43.749 slat (nsec): min=4788, max=84136, avg=28424.15, stdev=16570.06 00:26:43.749 clat (usec): min=15329, max=60780, avg=36658.56, stdev=4983.06 00:26:43.749 lat (usec): min=15348, max=60793, avg=36686.99, stdev=4985.96 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[21103], 5.00th=[26346], 10.00th=[27657], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39584], 00:26:43.749 | 99.00th=[56886], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:26:43.749 | 99.99th=[60556] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1984, per=4.30%, avg=1722.32, stdev=117.56, samples=19 00:26:43.749 iops : min= 384, max= 496, avg=430.58, stdev=29.39, samples=19 00:26:43.749 lat (msec) : 20=0.41%, 50=98.39%, 100=1.20% 00:26:43.749 cpu : usr=98.41%, sys=0.96%, ctx=97, majf=0, minf=34 00:26:43.749 IO depths : 1=4.5%, 2=9.2%, 4=20.0%, 8=57.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988249: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=420, BW=1681KiB/s (1722kB/s)(16.4MiB/10012msec) 00:26:43.749 slat (nsec): min=9368, max=88572, avg=19762.90, stdev=14035.47 00:26:43.749 clat (usec): min=7954, max=40489, avg=37907.64, stdev=2586.61 00:26:43.749 lat (usec): min=7964, max=40514, avg=37927.41, stdev=2586.43 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[29492], 5.00th=[37487], 10.00th=[38011], 20.00th=[38011], 00:26:43.749 | 30.00th=[38011], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38536], 90.00th=[38536], 95.00th=[39060], 00:26:43.749 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:26:43.749 | 99.99th=[40633] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.18%, avg=1676.40, stdev=57.35, samples=20 00:26:43.749 iops : min= 384, max= 448, avg=419.10, stdev=14.34, samples=20 00:26:43.749 lat (msec) : 10=0.59%, 20=0.17%, 50=99.24% 00:26:43.749 cpu : usr=98.68%, sys=0.80%, ctx=78, majf=0, minf=50 00:26:43.749 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988250: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10026msec) 00:26:43.749 slat (usec): min=8, max=102, avg=49.28, stdev=15.36 00:26:43.749 clat (usec): min=30044, max=67220, 
avg=37986.73, stdev=1885.85 00:26:43.749 lat (usec): min=30060, max=67243, avg=38036.02, stdev=1883.83 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[37487], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.749 | 99.00th=[39060], 99.50th=[40109], 99.90th=[67634], 99.95th=[67634], 00:26:43.749 | 99.99th=[67634] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.45, stdev=41.57, samples=20 00:26:43.749 iops : min= 384, max= 448, avg=415.85, stdev=10.39, samples=20 00:26:43.749 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.749 cpu : usr=99.12%, sys=0.53%, ctx=19, majf=0, minf=31 00:26:43.749 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988251: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=417, BW=1670KiB/s (1710kB/s)(16.3MiB/10004msec) 00:26:43.749 slat (nsec): min=10453, max=91482, avg=45841.89, stdev=18349.11 00:26:43.749 clat (usec): min=30110, max=47733, avg=37965.64, stdev=871.84 00:26:43.749 lat (usec): min=30122, max=47761, avg=38011.48, stdev=868.12 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.749 | 99.00th=[39584], 99.50th=[40109], 99.90th=[47449], 99.95th=[47449], 00:26:43.749 | 99.99th=[47973] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.17%, avg=1670.53, stdev=51.28, samples=19 00:26:43.749 iops : min= 384, max= 448, avg=417.63, stdev=12.82, samples=19 00:26:43.749 lat (msec) : 50=100.00% 00:26:43.749 cpu : usr=98.75%, sys=0.86%, ctx=19, majf=0, minf=33 00:26:43.749 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988252: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=416, BW=1666KiB/s (1706kB/s)(16.3MiB/10025msec) 00:26:43.749 slat (nsec): min=4907, max=89172, avg=47384.00, stdev=16521.93 00:26:43.749 clat (usec): min=31341, max=67311, avg=38002.83, stdev=1909.71 00:26:43.749 lat (usec): min=31361, max=67324, avg=38050.22, stdev=1906.85 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.749 | 99.00th=[40109], 99.50th=[40633], 99.90th=[67634], 99.95th=[67634], 00:26:43.749 | 99.99th=[67634] 00:26:43.749 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.80, stdev=41.54, samples=20 
00:26:43.749 iops : min= 384, max= 448, avg=415.95, stdev=10.38, samples=20 00:26:43.749 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.749 cpu : usr=98.63%, sys=0.86%, ctx=60, majf=0, minf=35 00:26:43.749 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.749 filename2: (groupid=0, jobs=1): err= 0: pid=3988253: Fri Apr 19 04:17:56 2024 00:26:43.749 read: IOPS=417, BW=1669KiB/s (1709kB/s)(16.3MiB/10009msec) 00:26:43.749 slat (usec): min=6, max=109, avg=50.84, stdev=23.41 00:26:43.749 clat (usec): min=24887, max=50746, avg=37850.10, stdev=1069.63 00:26:43.749 lat (usec): min=24902, max=50766, avg=37900.94, stdev=1069.66 00:26:43.749 clat percentiles (usec): 00:26:43.749 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:26:43.749 | 30.00th=[37487], 40.00th=[37487], 50.00th=[38011], 60.00th=[38011], 00:26:43.749 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.749 | 99.00th=[39584], 99.50th=[40633], 99.90th=[50594], 99.95th=[50594], 00:26:43.749 | 99.99th=[50594] 00:26:43.749 bw ( KiB/s): min= 1532, max= 1784, per=4.15%, avg=1663.74, stdev=42.44, samples=19 00:26:43.749 iops : min= 383, max= 446, avg=415.89, stdev=10.61, samples=19 00:26:43.749 lat (msec) : 50=99.57%, 100=0.43% 00:26:43.749 cpu : usr=98.83%, sys=0.77%, ctx=17, majf=0, minf=28 00:26:43.749 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:43.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.749 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.750 filename2: (groupid=0, jobs=1): err= 0: pid=3988254: Fri Apr 19 04:17:56 2024 00:26:43.750 read: IOPS=416, BW=1665KiB/s (1705kB/s)(16.3MiB/10031msec) 00:26:43.750 slat (usec): min=7, max=120, avg=50.77, stdev=24.21 00:26:43.750 clat (usec): min=31110, max=73115, avg=38005.54, stdev=2257.42 00:26:43.750 lat (usec): min=31151, max=73134, avg=38056.31, stdev=2253.98 00:26:43.750 clat percentiles (usec): 00:26:43.750 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:26:43.750 | 30.00th=[37487], 40.00th=[38011], 50.00th=[38011], 60.00th=[38011], 00:26:43.750 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[38536], 00:26:43.750 | 99.00th=[39584], 99.50th=[40109], 99.90th=[72877], 99.95th=[72877], 00:26:43.750 | 99.99th=[72877] 00:26:43.750 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1663.80, stdev=41.54, samples=20 00:26:43.750 iops : min= 384, max= 448, avg=415.95, stdev=10.38, samples=20 00:26:43.750 lat (msec) : 50=99.62%, 100=0.38% 00:26:43.750 cpu : usr=98.68%, sys=0.83%, ctx=30, majf=0, minf=26 00:26:43.750 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:43.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.750 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.750 issued rwts: total=4176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:43.750 00:26:43.750 
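A quick cross-check of the group numbers before the aggregate line below: each of the 24 jobs reports per = 4.15-4.30% of group bandwidth (an even split would be 1/24 = 4.17%), and 24 jobs x ~1665 KiB/s = ~39,960 KiB/s = ~39.0 MiB/s, which agrees with the READ summary of 39.1 MiB/s (393 MiB moved over the ~10 s runs: 393 MiB / 10.03 s = 39.2 MiB/s).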
Run status group 0 (all jobs): 00:26:43.750 READ: bw=39.1MiB/s (41.0MB/s), 1664KiB/s-1734KiB/s (1704kB/s-1776kB/s), io=393MiB (412MB), run=10004-10033msec 00:26:43.750 04:17:56 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:43.750 04:17:56 -- target/dif.sh@43 -- # local sub 00:26:43.750 04:17:56 -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.750 04:17:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:43.750 04:17:56 -- target/dif.sh@36 -- # local sub_id=0 00:26:43.750 04:17:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:56 -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.750 04:17:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:43.750 04:17:56 -- target/dif.sh@36 -- # local sub_id=1 00:26:43.750 04:17:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:56 -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.750 04:17:56 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:43.750 04:17:56 -- target/dif.sh@36 -- # local sub_id=2 00:26:43.750 04:17:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:43.750 04:17:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # numjobs=2 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # iodepth=8 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # runtime=5 00:26:43.750 04:17:57 -- target/dif.sh@115 -- # files=1 00:26:43.750 04:17:57 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:43.750 04:17:57 -- target/dif.sh@28 -- # local sub 00:26:43.750 04:17:57 -- target/dif.sh@30 -- # for sub in "$@" 00:26:43.750 04:17:57 -- target/dif.sh@31 -- # create_subsystem 0 00:26:43.750 04:17:57 -- target/dif.sh@18 -- # local sub_id=0 00:26:43.750 04:17:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 bdev_null0 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 [2024-04-19 04:17:57.031940] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@30 -- # for sub in "$@" 00:26:43.750 04:17:57 -- target/dif.sh@31 -- # create_subsystem 1 00:26:43.750 04:17:57 -- target/dif.sh@18 -- # local sub_id=1 00:26:43.750 04:17:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 bdev_null1 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.750 04:17:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.750 04:17:57 -- common/autotest_common.sh@10 -- # set +x 00:26:43.750 04:17:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.750 04:17:57 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:43.750 04:17:57 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:43.750 04:17:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:43.750 04:17:57 -- nvmf/common.sh@521 -- # config=() 00:26:43.750 04:17:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:43.750 04:17:57 -- nvmf/common.sh@521 -- # local subsystem config 00:26:43.750 04:17:57 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:43.750 04:17:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:43.750 04:17:57 -- target/dif.sh@82 -- # gen_fio_conf 00:26:43.750 04:17:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:43.750 { 00:26:43.750 "params": { 00:26:43.750 "name": "Nvme$subsystem", 00:26:43.750 "trtype": "$TEST_TRANSPORT", 00:26:43.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.750 "adrfam": "ipv4", 00:26:43.750 "trsvcid": "$NVMF_PORT", 00:26:43.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.750 "hdgst": ${hdgst:-false}, 00:26:43.750 "ddgst": ${ddgst:-false} 00:26:43.750 }, 00:26:43.750 "method": "bdev_nvme_attach_controller" 00:26:43.750 } 00:26:43.750 EOF 00:26:43.750 )") 00:26:43.750 04:17:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:43.750 04:17:57 -- target/dif.sh@54 -- # local file 00:26:43.750 04:17:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:43.750 04:17:57 -- target/dif.sh@56 -- # cat 00:26:43.750 04:17:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:43.750 04:17:57 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:43.750 04:17:57 -- common/autotest_common.sh@1327 -- # shift 00:26:43.750 04:17:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:43.750 04:17:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.750 04:17:57 -- nvmf/common.sh@543 -- # cat 00:26:43.750 04:17:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:43.750 04:17:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:43.750 04:17:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:43.750 04:17:57 -- target/dif.sh@72 -- # (( file <= files )) 00:26:43.750 04:17:57 -- target/dif.sh@73 -- # cat 00:26:43.750 04:17:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:43.750 04:17:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:43.750 04:17:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:43.750 { 00:26:43.750 "params": { 00:26:43.750 "name": "Nvme$subsystem", 00:26:43.750 "trtype": "$TEST_TRANSPORT", 00:26:43.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:43.750 "adrfam": "ipv4", 00:26:43.750 "trsvcid": "$NVMF_PORT", 00:26:43.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:43.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:43.751 "hdgst": ${hdgst:-false}, 00:26:43.751 "ddgst": ${ddgst:-false} 00:26:43.751 }, 00:26:43.751 "method": "bdev_nvme_attach_controller" 00:26:43.751 } 00:26:43.751 EOF 00:26:43.751 )") 00:26:43.751 04:17:57 -- nvmf/common.sh@543 -- # cat 00:26:43.751 04:17:57 -- target/dif.sh@72 -- # (( file++ )) 00:26:43.751 04:17:57 -- target/dif.sh@72 -- # (( file <= files )) 00:26:43.751 04:17:57 -- nvmf/common.sh@545 -- # jq . 
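The xtrace above shows how gen_nvmf_target_json assembles the fio plugin's JSON: one bdev_nvme_attach_controller params object per subsystem, rendered from a heredoc template into a bash array, then joined and piped through jq. A minimal standalone sketch of the same pattern, with values hardcoded to match this run (the harness substitutes them from the test environment and composes the final document slightly differently):

config=()
for i in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join the per-subsystem fragments with commas; wrapped in [] here so jq
# receives one valid JSON document to validate and pretty-print
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .

The resolved configuration that the test actually feeds to fio is printed next.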
00:26:43.751 04:17:57 -- nvmf/common.sh@546 -- # IFS=, 00:26:43.751 04:17:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:43.751 "params": { 00:26:43.751 "name": "Nvme0", 00:26:43.751 "trtype": "tcp", 00:26:43.751 "traddr": "10.0.0.2", 00:26:43.751 "adrfam": "ipv4", 00:26:43.751 "trsvcid": "4420", 00:26:43.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:43.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:43.751 "hdgst": false, 00:26:43.751 "ddgst": false 00:26:43.751 }, 00:26:43.751 "method": "bdev_nvme_attach_controller" 00:26:43.751 },{ 00:26:43.751 "params": { 00:26:43.751 "name": "Nvme1", 00:26:43.751 "trtype": "tcp", 00:26:43.751 "traddr": "10.0.0.2", 00:26:43.751 "adrfam": "ipv4", 00:26:43.751 "trsvcid": "4420", 00:26:43.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:43.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:43.751 "hdgst": false, 00:26:43.751 "ddgst": false 00:26:43.751 }, 00:26:43.751 "method": "bdev_nvme_attach_controller" 00:26:43.751 }' 00:26:43.751 04:17:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:43.751 04:17:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:43.751 04:17:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.751 04:17:57 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:43.751 04:17:57 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:43.751 04:17:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:43.751 04:17:57 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:43.751 04:17:57 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:43.751 04:17:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:43.751 04:17:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:43.751 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:43.751 ... 00:26:43.751 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:43.751 ... 
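fio prints one banner line per job before the run starts; the (R) 8 KiB / (W) 16 KiB / (T) 128 KiB sizes and iodepth=8 mirror the bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 values set at dif.sh@115 above. A job file along these lines would produce that banner (a sketch only: gen_fio_conf's exact output is not captured here, and the Nvme*n1 filenames are assumed to be the bdevs attached by the JSON config):

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1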
00:26:43.751 fio-3.35 00:26:43.751 Starting 4 threads 00:26:43.751 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.059 00:26:49.059 filename0: (groupid=0, jobs=1): err= 0: pid=3990472: Fri Apr 19 04:18:03 2024 00:26:49.059 read: IOPS=1731, BW=13.5MiB/s (14.2MB/s)(67.6MiB/5001msec) 00:26:49.059 slat (usec): min=9, max=169, avg=21.40, stdev=13.75 00:26:49.059 clat (usec): min=873, max=8640, avg=4558.75, stdev=564.84 00:26:49.059 lat (usec): min=883, max=8663, avg=4580.14, stdev=564.24 00:26:49.059 clat percentiles (usec): 00:26:49.059 | 1.00th=[ 3097], 5.00th=[ 3720], 10.00th=[ 4015], 20.00th=[ 4228], 00:26:49.059 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:26:49.059 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5538], 00:26:49.059 | 99.00th=[ 6587], 99.50th=[ 7111], 99.90th=[ 7767], 99.95th=[ 8029], 00:26:49.059 | 99.99th=[ 8586] 00:26:49.059 bw ( KiB/s): min=13424, max=14160, per=24.85%, avg=13768.89, stdev=231.46, samples=9 00:26:49.059 iops : min= 1678, max= 1770, avg=1721.11, stdev=28.93, samples=9 00:26:49.059 lat (usec) : 1000=0.01% 00:26:49.059 lat (msec) : 2=0.10%, 4=9.39%, 10=90.49% 00:26:49.059 cpu : usr=94.86%, sys=3.40%, ctx=24, majf=0, minf=9 00:26:49.059 IO depths : 1=0.1%, 2=7.8%, 4=62.4%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.059 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.059 issued rwts: total=8657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.059 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:49.059 filename0: (groupid=0, jobs=1): err= 0: pid=3990473: Fri Apr 19 04:18:03 2024 00:26:49.059 read: IOPS=1772, BW=13.8MiB/s (14.5MB/s)(69.3MiB/5004msec) 00:26:49.059 slat (usec): min=8, max=177, avg=19.93, stdev=13.58 00:26:49.059 clat (usec): min=853, max=8223, avg=4449.75, stdev=663.65 00:26:49.059 lat (usec): min=869, max=8255, avg=4469.68, stdev=664.56 00:26:49.060 clat percentiles (usec): 00:26:49.060 | 1.00th=[ 2769], 5.00th=[ 3326], 10.00th=[ 3687], 20.00th=[ 4080], 00:26:49.060 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:26:49.060 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5473], 00:26:49.060 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 8094], 00:26:49.060 | 99.99th=[ 8225] 00:26:49.060 bw ( KiB/s): min=13632, max=15344, per=25.74%, avg=14261.33, stdev=618.18, samples=9 00:26:49.060 iops : min= 1704, max= 1918, avg=1782.67, stdev=77.27, samples=9 00:26:49.060 lat (usec) : 1000=0.01% 00:26:49.060 lat (msec) : 2=0.12%, 4=17.72%, 10=82.14% 00:26:49.060 cpu : usr=96.78%, sys=2.82%, ctx=9, majf=0, minf=0 00:26:49.060 IO depths : 1=0.4%, 2=8.5%, 4=64.6%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 issued rwts: total=8869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.060 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:49.060 filename1: (groupid=0, jobs=1): err= 0: pid=3990474: Fri Apr 19 04:18:03 2024 00:26:49.060 read: IOPS=1710, BW=13.4MiB/s (14.0MB/s)(66.8MiB/5003msec) 00:26:49.060 slat (nsec): min=8546, max=80382, avg=21723.46, stdev=14739.99 00:26:49.060 clat (usec): min=853, max=8565, avg=4603.36, stdev=711.98 00:26:49.060 lat (usec): min=870, max=8622, avg=4625.08, stdev=711.38 00:26:49.060 clat percentiles (usec): 
00:26:49.060 | 1.00th=[ 2966], 5.00th=[ 3654], 10.00th=[ 4015], 20.00th=[ 4228], 00:26:49.060 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:26:49.060 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5211], 95.00th=[ 6194], 00:26:49.060 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 8160], 99.95th=[ 8291], 00:26:49.060 | 99.99th=[ 8586] 00:26:49.060 bw ( KiB/s): min=13360, max=14032, per=24.62%, avg=13643.78, stdev=231.76, samples=9 00:26:49.060 iops : min= 1670, max= 1754, avg=1705.44, stdev=29.00, samples=9 00:26:49.060 lat (usec) : 1000=0.02% 00:26:49.060 lat (msec) : 2=0.14%, 4=9.36%, 10=90.47% 00:26:49.060 cpu : usr=96.98%, sys=2.62%, ctx=8, majf=0, minf=9 00:26:49.060 IO depths : 1=0.2%, 2=9.1%, 4=63.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 issued rwts: total=8556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.060 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:49.060 filename1: (groupid=0, jobs=1): err= 0: pid=3990475: Fri Apr 19 04:18:03 2024 00:26:49.060 read: IOPS=1714, BW=13.4MiB/s (14.0MB/s)(67.0MiB/5002msec) 00:26:49.060 slat (usec): min=9, max=180, avg=21.47, stdev=14.82 00:26:49.060 clat (usec): min=750, max=8212, avg=4591.76, stdev=703.20 00:26:49.060 lat (usec): min=762, max=8228, avg=4613.23, stdev=703.04 00:26:49.060 clat percentiles (usec): 00:26:49.060 | 1.00th=[ 2868], 5.00th=[ 3687], 10.00th=[ 4015], 20.00th=[ 4228], 00:26:49.060 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4555], 00:26:49.060 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 6128], 00:26:49.060 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 7963], 00:26:49.060 | 99.99th=[ 8225] 00:26:49.060 bw ( KiB/s): min=13168, max=14016, per=24.65%, avg=13658.67, stdev=292.08, samples=9 00:26:49.060 iops : min= 1646, max= 1752, avg=1707.33, stdev=36.51, samples=9 00:26:49.060 lat (usec) : 1000=0.06% 00:26:49.060 lat (msec) : 2=0.33%, 4=9.53%, 10=90.09% 00:26:49.060 cpu : usr=97.22%, sys=2.36%, ctx=11, majf=0, minf=11 00:26:49.060 IO depths : 1=0.1%, 2=10.3%, 4=62.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.060 issued rwts: total=8575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.060 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:49.060 00:26:49.060 Run status group 0 (all jobs): 00:26:49.060 READ: bw=54.1MiB/s (56.7MB/s), 13.4MiB/s-13.8MiB/s (14.0MB/s-14.5MB/s), io=271MiB (284MB), run=5001-5004msec 00:26:49.060 04:18:03 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:49.060 04:18:03 -- target/dif.sh@43 -- # local sub 00:26:49.060 04:18:03 -- target/dif.sh@45 -- # for sub in "$@" 00:26:49.060 04:18:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:49.060 04:18:03 -- target/dif.sh@36 -- # local sub_id=0 00:26:49.060 04:18:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:49.060 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.060 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.060 04:18:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:49.060 04:18:03 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.060 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.060 04:18:03 -- target/dif.sh@45 -- # for sub in "$@" 00:26:49.060 04:18:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:49.060 04:18:03 -- target/dif.sh@36 -- # local sub_id=1 00:26:49.060 04:18:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.060 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.060 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.060 04:18:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:49.060 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.060 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.060 00:26:49.060 real 0m24.775s 00:26:49.060 user 5m9.241s 00:26:49.060 sys 0m4.470s 00:26:49.060 04:18:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.060 ************************************ 00:26:49.060 END TEST fio_dif_rand_params 00:26:49.060 ************************************ 00:26:49.060 04:18:03 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:49.060 04:18:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:49.060 04:18:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.060 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.327 ************************************ 00:26:49.327 START TEST fio_dif_digest 00:26:49.327 ************************************ 00:26:49.327 04:18:03 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:26:49.328 04:18:03 -- target/dif.sh@123 -- # local NULL_DIF 00:26:49.328 04:18:03 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:49.328 04:18:03 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:49.328 04:18:03 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:49.328 04:18:03 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:49.328 04:18:03 -- target/dif.sh@127 -- # numjobs=3 00:26:49.328 04:18:03 -- target/dif.sh@127 -- # iodepth=3 00:26:49.328 04:18:03 -- target/dif.sh@127 -- # runtime=10 00:26:49.328 04:18:03 -- target/dif.sh@128 -- # hdgst=true 00:26:49.328 04:18:03 -- target/dif.sh@128 -- # ddgst=true 00:26:49.328 04:18:03 -- target/dif.sh@130 -- # create_subsystems 0 00:26:49.328 04:18:03 -- target/dif.sh@28 -- # local sub 00:26:49.328 04:18:03 -- target/dif.sh@30 -- # for sub in "$@" 00:26:49.328 04:18:03 -- target/dif.sh@31 -- # create_subsystem 0 00:26:49.328 04:18:03 -- target/dif.sh@18 -- # local sub_id=0 00:26:49.328 04:18:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:49.328 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.328 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.328 bdev_null0 00:26:49.328 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.328 04:18:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:49.328 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.328 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.328 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.328 04:18:03 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:49.328 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.328 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.328 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.328 04:18:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:49.328 04:18:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.328 04:18:03 -- common/autotest_common.sh@10 -- # set +x 00:26:49.328 [2024-04-19 04:18:03.730299] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.328 04:18:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.328 04:18:03 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:49.328 04:18:03 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:49.328 04:18:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:49.328 04:18:03 -- nvmf/common.sh@521 -- # config=() 00:26:49.328 04:18:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.328 04:18:03 -- nvmf/common.sh@521 -- # local subsystem config 00:26:49.328 04:18:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:49.328 04:18:03 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.328 04:18:03 -- target/dif.sh@82 -- # gen_fio_conf 00:26:49.328 04:18:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:49.328 { 00:26:49.328 "params": { 00:26:49.328 "name": "Nvme$subsystem", 00:26:49.328 "trtype": "$TEST_TRANSPORT", 00:26:49.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.328 "adrfam": "ipv4", 00:26:49.328 "trsvcid": "$NVMF_PORT", 00:26:49.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.328 "hdgst": ${hdgst:-false}, 00:26:49.328 "ddgst": ${ddgst:-false} 00:26:49.328 }, 00:26:49.328 "method": "bdev_nvme_attach_controller" 00:26:49.328 } 00:26:49.328 EOF 00:26:49.328 )") 00:26:49.328 04:18:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:49.328 04:18:03 -- target/dif.sh@54 -- # local file 00:26:49.328 04:18:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:49.328 04:18:03 -- target/dif.sh@56 -- # cat 00:26:49.328 04:18:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:49.328 04:18:03 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:49.328 04:18:03 -- common/autotest_common.sh@1327 -- # shift 00:26:49.328 04:18:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:49.328 04:18:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.328 04:18:03 -- nvmf/common.sh@543 -- # cat 00:26:49.328 04:18:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:49.328 04:18:03 -- target/dif.sh@72 -- # (( file <= files )) 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:49.328 04:18:03 -- nvmf/common.sh@545 -- # jq . 
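The DIF-capable target just assembled can be reproduced outside the harness with the plain RPC calls that rpc_cmd wraps above (a sketch: rpc.py is invoked from an SPDK checkout, and the nvmf target must already be up and listening on its RPC socket):

# null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420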
00:26:49.328 04:18:03 -- nvmf/common.sh@546 -- # IFS=, 00:26:49.328 04:18:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:49.328 "params": { 00:26:49.328 "name": "Nvme0", 00:26:49.328 "trtype": "tcp", 00:26:49.328 "traddr": "10.0.0.2", 00:26:49.328 "adrfam": "ipv4", 00:26:49.328 "trsvcid": "4420", 00:26:49.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:49.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:49.328 "hdgst": true, 00:26:49.328 "ddgst": true 00:26:49.328 }, 00:26:49.328 "method": "bdev_nvme_attach_controller" 00:26:49.328 }' 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:49.328 04:18:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:49.328 04:18:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:49.328 04:18:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:49.328 04:18:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:49.328 04:18:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:49.328 04:18:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:49.901 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:49.901 ... 00:26:49.901 fio-3.35 00:26:49.901 Starting 3 threads 00:26:49.901 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.103 00:27:02.103 filename0: (groupid=0, jobs=1): err= 0: pid=3991809: Fri Apr 19 04:18:14 2024 00:27:02.103 read: IOPS=192, BW=24.1MiB/s (25.3MB/s)(241MiB/10012msec) 00:27:02.103 slat (nsec): min=9486, max=91669, avg=19181.49, stdev=7467.21 00:27:02.104 clat (usec): min=9311, max=60659, avg=15547.50, stdev=2733.72 00:27:02.104 lat (usec): min=9327, max=60669, avg=15566.68, stdev=2733.33 00:27:02.104 clat percentiles (usec): 00:27:02.104 | 1.00th=[10421], 5.00th=[13304], 10.00th=[13960], 20.00th=[14615], 00:27:02.104 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:27:02.104 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:27:02.104 | 99.00th=[18220], 99.50th=[18744], 99.90th=[59507], 99.95th=[60556], 00:27:02.104 | 99.99th=[60556] 00:27:02.104 bw ( KiB/s): min=22528, max=26112, per=34.68%, avg=24652.80, stdev=851.49, samples=20 00:27:02.104 iops : min= 176, max= 204, avg=192.60, stdev= 6.65, samples=20 00:27:02.104 lat (msec) : 10=0.41%, 20=99.27%, 100=0.31% 00:27:02.104 cpu : usr=96.48%, sys=3.15%, ctx=22, majf=0, minf=115 00:27:02.104 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 issued rwts: total=1929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.104 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.104 filename0: (groupid=0, jobs=1): err= 0: pid=3991810: Fri Apr 19 04:18:14 2024 00:27:02.104 read: IOPS=181, BW=22.6MiB/s (23.7MB/s)(228MiB/10051msec) 00:27:02.104 slat (nsec): min=9525, max=49916, avg=19349.38, stdev=7044.58 00:27:02.104 clat (usec): min=9054, 
max=60308, avg=16512.31, stdev=2590.47 00:27:02.104 lat (usec): min=9069, max=60326, avg=16531.66, stdev=2590.86 00:27:02.104 clat percentiles (usec): 00:27:02.104 | 1.00th=[10683], 5.00th=[14091], 10.00th=[15008], 20.00th=[15533], 00:27:02.104 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:27:02.104 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:27:02.104 | 99.00th=[19268], 99.50th=[19792], 99.90th=[59507], 99.95th=[60556], 00:27:02.104 | 99.99th=[60556] 00:27:02.104 bw ( KiB/s): min=20992, max=25088, per=32.73%, avg=23270.40, stdev=917.02, samples=20 00:27:02.104 iops : min= 164, max= 196, avg=181.80, stdev= 7.16, samples=20 00:27:02.104 lat (msec) : 10=0.16%, 20=99.40%, 50=0.16%, 100=0.27% 00:27:02.104 cpu : usr=96.20%, sys=3.44%, ctx=15, majf=0, minf=144 00:27:02.104 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 issued rwts: total=1821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.104 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.104 filename0: (groupid=0, jobs=1): err= 0: pid=3991811: Fri Apr 19 04:18:14 2024 00:27:02.104 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(229MiB/10049msec) 00:27:02.104 slat (usec): min=9, max=172, avg=20.54, stdev= 8.11 00:27:02.104 clat (usec): min=8920, max=58931, avg=16405.33, stdev=3820.35 00:27:02.104 lat (usec): min=8936, max=58958, avg=16425.87, stdev=3820.82 00:27:02.104 clat percentiles (usec): 00:27:02.104 | 1.00th=[11731], 5.00th=[14091], 10.00th=[14746], 20.00th=[15270], 00:27:02.104 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:27:02.104 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:27:02.104 | 99.00th=[20055], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:27:02.104 | 99.99th=[58983] 00:27:02.104 bw ( KiB/s): min=19968, max=24576, per=32.95%, avg=23426.30, stdev=1148.61, samples=20 00:27:02.104 iops : min= 156, max= 192, avg=183.00, stdev= 8.98, samples=20 00:27:02.104 lat (msec) : 10=0.16%, 20=98.80%, 50=0.27%, 100=0.76% 00:27:02.104 cpu : usr=95.75%, sys=3.89%, ctx=27, majf=0, minf=173 00:27:02.104 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.104 issued rwts: total=1832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.104 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.104 00:27:02.104 Run status group 0 (all jobs): 00:27:02.104 READ: bw=69.4MiB/s (72.8MB/s), 22.6MiB/s-24.1MiB/s (23.7MB/s-25.3MB/s), io=698MiB (732MB), run=10012-10051msec 00:27:02.104 04:18:14 -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:02.104 04:18:14 -- target/dif.sh@43 -- # local sub 00:27:02.104 04:18:14 -- target/dif.sh@45 -- # for sub in "$@" 00:27:02.104 04:18:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:02.104 04:18:14 -- target/dif.sh@36 -- # local sub_id=0 00:27:02.104 04:18:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:02.104 04:18:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.104 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:27:02.104 04:18:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.104 04:18:14 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:02.104 04:18:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:02.104 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:27:02.104 04:18:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:02.104 00:27:02.104 real 0m11.200s 00:27:02.104 user 0m40.160s 00:27:02.104 sys 0m1.380s 00:27:02.104 04:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:02.104 04:18:14 -- common/autotest_common.sh@10 -- # set +x 00:27:02.104 ************************************ 00:27:02.104 END TEST fio_dif_digest 00:27:02.104 ************************************ 00:27:02.104 04:18:14 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:02.104 04:18:14 -- target/dif.sh@147 -- # nvmftestfini 00:27:02.104 04:18:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:02.104 04:18:14 -- nvmf/common.sh@117 -- # sync 00:27:02.104 04:18:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.104 04:18:14 -- nvmf/common.sh@120 -- # set +e 00:27:02.104 04:18:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.104 04:18:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.104 rmmod nvme_tcp 00:27:02.104 rmmod nvme_fabrics 00:27:02.104 rmmod nvme_keyring 00:27:02.104 04:18:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.104 04:18:14 -- nvmf/common.sh@124 -- # set -e 00:27:02.104 04:18:14 -- nvmf/common.sh@125 -- # return 0 00:27:02.104 04:18:14 -- nvmf/common.sh@478 -- # '[' -n 3981864 ']' 00:27:02.104 04:18:14 -- nvmf/common.sh@479 -- # killprocess 3981864 00:27:02.104 04:18:14 -- common/autotest_common.sh@936 -- # '[' -z 3981864 ']' 00:27:02.104 04:18:14 -- common/autotest_common.sh@940 -- # kill -0 3981864 00:27:02.104 04:18:14 -- common/autotest_common.sh@941 -- # uname 00:27:02.104 04:18:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.104 04:18:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3981864 00:27:02.104 04:18:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.104 04:18:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.104 04:18:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3981864' 00:27:02.104 killing process with pid 3981864 00:27:02.104 04:18:15 -- common/autotest_common.sh@955 -- # kill 3981864 00:27:02.104 04:18:15 -- common/autotest_common.sh@960 -- # wait 3981864 00:27:02.104 04:18:15 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:27:02.104 04:18:15 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:03.484 Waiting for block devices as requested 00:27:03.484 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:27:03.744 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:03.744 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:03.744 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:04.004 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:04.004 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:04.004 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:04.264 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:04.264 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:04.264 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:04.523 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:04.523 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:04.523 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:04.523 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:04.781 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:27:04.781 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:04.781 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:05.040 04:18:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:05.040 04:18:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:05.040 04:18:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.040 04:18:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.040 04:18:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.040 04:18:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:05.040 04:18:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.943 04:18:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.943 00:27:06.943 real 1m15.548s 00:27:06.943 user 7m43.333s 00:27:06.943 sys 0m18.948s 00:27:06.943 04:18:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:06.943 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:06.943 ************************************ 00:27:06.943 END TEST nvmf_dif 00:27:06.943 ************************************ 00:27:06.943 04:18:21 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:06.943 04:18:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:06.943 04:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.943 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:07.202 ************************************ 00:27:07.202 START TEST nvmf_abort_qd_sizes 00:27:07.202 ************************************ 00:27:07.202 04:18:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:07.202 * Looking for test storage... 
00:27:07.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.202 04:18:21 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.202 04:18:21 -- nvmf/common.sh@7 -- # uname -s 00:27:07.202 04:18:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.202 04:18:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.202 04:18:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.202 04:18:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.202 04:18:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.202 04:18:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.202 04:18:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.202 04:18:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.202 04:18:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.202 04:18:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.202 04:18:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:07.202 04:18:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:07.202 04:18:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.202 04:18:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.202 04:18:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.202 04:18:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.202 04:18:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.202 04:18:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.202 04:18:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.202 04:18:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.202 04:18:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.202 04:18:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.202 04:18:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.202 04:18:21 -- paths/export.sh@5 -- # export PATH 00:27:07.202 04:18:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.202 04:18:21 -- nvmf/common.sh@47 -- # : 0 00:27:07.202 04:18:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.202 04:18:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.202 04:18:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.202 04:18:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.202 04:18:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.202 04:18:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.202 04:18:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.202 04:18:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.202 04:18:21 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:07.202 04:18:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:07.202 04:18:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.202 04:18:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:07.202 04:18:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:07.202 04:18:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:07.202 04:18:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.202 04:18:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:07.202 04:18:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.202 04:18:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:07.202 04:18:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:07.202 04:18:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.202 04:18:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.763 04:18:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:13.763 04:18:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.763 04:18:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.763 04:18:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.763 04:18:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.763 04:18:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.763 04:18:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.763 04:18:27 -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.763 04:18:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.763 04:18:27 -- nvmf/common.sh@296 -- # e810=() 00:27:13.763 04:18:27 -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.763 04:18:27 -- nvmf/common.sh@297 -- # x722=() 00:27:13.763 04:18:27 -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.763 04:18:27 -- nvmf/common.sh@298 -- # mlx=() 00:27:13.763 04:18:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.763 04:18:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.763 04:18:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.764 04:18:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.764 04:18:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:13.764 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:13.764 04:18:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.764 04:18:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:13.764 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:13.764 04:18:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.764 04:18:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.764 04:18:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.764 04:18:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:13.764 Found net devices under 0000:af:00.0: cvl_0_0 00:27:13.764 04:18:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.764 04:18:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.764 04:18:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.764 04:18:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:13.764 Found net devices under 0000:af:00.1: cvl_0_1 00:27:13.764 04:18:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:13.764 04:18:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:13.764 04:18:27 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:13.764 04:18:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:13.764 04:18:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.764 04:18:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.764 04:18:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.764 04:18:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.764 04:18:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.764 04:18:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.764 04:18:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.764 04:18:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.764 04:18:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.764 04:18:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.764 04:18:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.764 04:18:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.764 04:18:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.764 04:18:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.764 04:18:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.764 04:18:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.764 04:18:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.764 04:18:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.764 04:18:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:13.764 00:27:13.764 --- 10.0.0.2 ping statistics --- 00:27:13.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.764 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:13.764 04:18:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:27:13.764 00:27:13.764 --- 10.0.0.1 ping statistics --- 00:27:13.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.764 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:13.764 04:18:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.764 04:18:27 -- nvmf/common.sh@411 -- # return 0 00:27:13.764 04:18:27 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:13.764 04:18:27 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.668 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.668 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:16.611 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:27:16.611 04:18:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.611 04:18:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:16.611 04:18:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:16.611 04:18:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.611 04:18:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:16.611 04:18:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:16.611 04:18:31 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:16.611 04:18:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:16.611 04:18:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:16.611 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:27:16.611 04:18:31 -- nvmf/common.sh@470 -- # nvmfpid=4000510 00:27:16.611 04:18:31 -- nvmf/common.sh@471 -- # waitforlisten 4000510 00:27:16.611 04:18:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:16.611 04:18:31 -- common/autotest_common.sh@817 -- # '[' -z 4000510 ']' 00:27:16.611 04:18:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.611 04:18:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:16.611 04:18:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.611 04:18:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:16.611 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:27:16.921 [2024-04-19 04:18:31.147821] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:27:16.921 [2024-04-19 04:18:31.147875] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:16.921 EAL: No free 2048 kB hugepages reported on node 1
00:27:16.921 [2024-04-19 04:18:31.227869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:16.921 [2024-04-19 04:18:31.320526] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:16.921 [2024-04-19 04:18:31.320570] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:16.921 [2024-04-19 04:18:31.320581] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:16.921 [2024-04-19 04:18:31.320592] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:16.921 [2024-04-19 04:18:31.320600] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:16.921 [2024-04-19 04:18:31.320660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:16.921 [2024-04-19 04:18:31.320834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:16.921 [2024-04-19 04:18:31.320928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:16.921 [2024-04-19 04:18:31.320928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:17.487 04:18:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:17.487 04:18:31 -- common/autotest_common.sh@850 -- # return 0
00:27:17.487 04:18:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:27:17.487 04:18:31 -- common/autotest_common.sh@716 -- # xtrace_disable
00:27:17.487 04:18:31 -- common/autotest_common.sh@10 -- # set +x
00:27:17.746 04:18:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:27:17.746 04:18:32 -- scripts/common.sh@309 -- # local bdf bdfs
00:27:17.746 04:18:32 -- scripts/common.sh@310 -- # local nvmes
00:27:17.746 04:18:32 -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]]
00:27:17.746 04:18:32 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:27:17.746 04:18:32 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}"
00:27:17.746 04:18:32 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]]
00:27:17.746 04:18:32 -- scripts/common.sh@320 -- # uname -s
00:27:17.746 04:18:32 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]]
00:27:17.746 04:18:32 -- scripts/common.sh@323 -- # bdfs+=("$bdf")
00:27:17.746 04:18:32 -- scripts/common.sh@325 -- # (( 1 ))
00:27:17.746 04:18:32 -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:27:17.746 04:18:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:17.746 04:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:17.746 04:18:32 -- common/autotest_common.sh@10 -- # set +x
00:27:17.746 ************************************
00:27:17.746 START TEST spdk_target_abort
00:27:17.746 ************************************
00:27:17.746 04:18:32 -- common/autotest_common.sh@1111 -- # spdk_target
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:27:17.746 04:18:32 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target
00:27:17.746 04:18:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:17.746 04:18:32 -- common/autotest_common.sh@10 -- # set +x
00:27:21.031 spdk_targetn1
00:27:21.031 04:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:21.031 04:18:35 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:21.031 04:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:21.031 04:18:35 -- common/autotest_common.sh@10 -- # set +x
00:27:21.031 [2024-04-19 04:18:35.041665] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:21.031 04:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:21.031 04:18:35 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:27:21.031 04:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:21.031 04:18:35 -- common/autotest_common.sh@10 -- # set +x
00:27:21.032 04:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:27:21.032 04:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:21.032 04:18:35 -- common/autotest_common.sh@10 -- # set +x
00:27:21.032 04:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:27:21.032 04:18:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:21.032 04:18:35 -- common/autotest_common.sh@10 -- # set +x
00:27:21.032 [2024-04-19 04:18:35.074545] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:21.032 04:18:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
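The xtrace above shows the harness standing up the abort target entirely through SPDK JSON-RPCs; rpc_cmd is the suite's wrapper around scripts/rpc.py talking to the already-running nvmf app. A condensed sketch of the same setup as standalone commands (the wrapper and socket handling are assumed; the RPC names and arguments are exactly the ones traced above):

    # Build an NVMe-oF/TCP target around the local NVMe device 0000:86:00.0,
    # mirroring the rpc_cmd calls in the trace above.
    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The rabort helper whose trace continues below then just concatenates these transport parameters into the -r target string passed to the abort example for each queue depth in qds=(4 24 64).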
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:21.032 04:18:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:21.032 EAL: No free 2048 kB hugepages reported on node 1
00:27:24.318 Initializing NVMe Controllers
00:27:24.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:24.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:24.318 Initialization complete. Launching workers.
00:27:24.318 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14471, failed: 0
00:27:24.318 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1647, failed to submit 12824
00:27:24.318 success 767, unsuccess 880, failed 0
00:27:24.318 04:18:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:24.318 04:18:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:24.318 EAL: No free 2048 kB hugepages reported on node 1
00:27:27.641 Initializing NVMe Controllers
00:27:27.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:27.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:27.641 Initialization complete. Launching workers.
00:27:27.641 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8639, failed: 0
00:27:27.641 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7411
00:27:27.641 success 331, unsuccess 897, failed 0
00:27:27.641 04:18:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:27.641 04:18:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:27.641 EAL: No free 2048 kB hugepages reported on node 1
00:27:30.926 Initializing NVMe Controllers
00:27:30.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:27:30.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:30.926 Initialization complete. Launching workers.
00:27:30.926 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37698, failed: 0
00:27:30.926 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2661, failed to submit 35037
00:27:30.926 success 578, unsuccess 2083, failed 0
00:27:30.926 04:18:44 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:27:30.926 04:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:30.926 04:18:44 -- common/autotest_common.sh@10 -- # set +x
00:27:30.926 04:18:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:30.926 04:18:44 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:27:30.926 04:18:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:30.926 04:18:44 -- common/autotest_common.sh@10 -- # set +x
00:27:31.861 04:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:31.861 04:18:46 -- target/abort_qd_sizes.sh@61 -- # killprocess 4000510
00:27:31.861 04:18:46 -- common/autotest_common.sh@936 -- # '[' -z 4000510 ']'
00:27:31.861 04:18:46 -- common/autotest_common.sh@940 -- # kill -0 4000510
00:27:31.861 04:18:46 -- common/autotest_common.sh@941 -- # uname
00:27:31.861 04:18:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:31.861 04:18:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4000510
00:27:31.861 04:18:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:27:31.861 04:18:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:27:31.861 04:18:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4000510'
00:27:31.861 killing process with pid 4000510
00:27:31.861 04:18:46 -- common/autotest_common.sh@955 -- # kill 4000510
00:27:31.861 04:18:46 -- common/autotest_common.sh@960 -- # wait 4000510
00:27:32.120 
00:27:32.120 real	0m14.351s
00:27:32.120 user	0m57.700s
00:27:32.120 sys	0m2.148s
00:27:32.120 04:18:46 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:27:32.120 04:18:46 -- common/autotest_common.sh@10 -- # set +x
00:27:32.120 ************************************
00:27:32.120 END TEST spdk_target_abort
00:27:32.120 ************************************
00:27:32.120 04:18:46 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:27:32.120 04:18:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:32.120 04:18:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:32.120 04:18:46 -- common/autotest_common.sh@10 -- # set +x
00:27:32.379 ************************************
00:27:32.379 START TEST kernel_target_abort
00:27:32.379 ************************************
00:27:32.379 04:18:46 -- common/autotest_common.sh@1111 -- # kernel_target
00:27:32.379 04:18:46 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:27:32.379 04:18:46 -- nvmf/common.sh@717 -- # local ip
00:27:32.379 04:18:46 -- nvmf/common.sh@718 -- # ip_candidates=()
00:27:32.379 04:18:46 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:27:32.379 04:18:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.379 04:18:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.379 04:18:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:27:32.379 04:18:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.379 04:18:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:27:32.379 04:18:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:27:32.379 04:18:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:27:32.379 04:18:46 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:27:32.379 04:18:46 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:27:32.379 04:18:46 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet
00:27:32.379 04:18:46 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:32.379 04:18:46 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:32.379 04:18:46 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:27:32.379 04:18:46 -- nvmf/common.sh@628 -- # local block nvme
00:27:32.379 04:18:46 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]]
00:27:32.379 04:18:46 -- nvmf/common.sh@631 -- # modprobe nvmet
00:27:32.379 04:18:46 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:32.379 04:18:46 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:34.911 Waiting for block devices as requested
00:27:34.911 0000:86:00.0 (8086 0a54): vfio-pci -> nvme
00:27:35.170 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:35.170 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:35.170 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:35.429 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:35.429 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:35.429 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:35.429 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:35.687 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:35.687 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:35.687 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:35.945 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:35.945 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:35.945 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:36.204 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:36.204 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:36.204 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:36.463 04:18:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme*
00:27:36.463 04:18:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:36.463 04:18:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1
00:27:36.463 04:18:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:36.463 04:18:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:36.463 04:18:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:36.463 04:18:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n1
00:27:36.463 04:18:50 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:27:36.463 04:18:50 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:36.463 No valid GPT data, bailing
00:27:36.463 04:18:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:36.463 04:18:50 -- scripts/common.sh@391 -- # pt=
00:27:36.463 04:18:50 -- scripts/common.sh@392 -- # return 1
00:27:36.463 04:18:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1
00:27:36.463 04:18:50 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]]
00:27:36.463 04:18:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:36.463 04:18:50 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:36.463 04:18:50 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:36.463 04:18:50 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:36.463 04:18:50 -- nvmf/common.sh@656 -- # echo 1
00:27:36.463 04:18:50 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
00:27:36.463 04:18:50 -- nvmf/common.sh@658 -- # echo 1
00:27:36.463 04:18:50 -- nvmf/common.sh@660 -- # echo 10.0.0.1
00:27:36.463 04:18:50 -- nvmf/common.sh@661 -- # echo tcp
00:27:36.463 04:18:50 -- nvmf/common.sh@662 -- # echo 4420
00:27:36.463 04:18:50 -- nvmf/common.sh@663 -- # echo ipv4
00:27:36.463 04:18:50 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:36.463 04:18:50 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:27:36.463 
00:27:36.463 Discovery Log Number of Records 2, Generation counter 2
00:27:36.463 =====Discovery Log Entry 0======
00:27:36.463 trtype:  tcp
00:27:36.463 adrfam:  ipv4
00:27:36.463 subtype: current discovery subsystem
00:27:36.463 treq:    not specified, sq flow control disable supported
00:27:36.463 portid:  1
00:27:36.463 trsvcid: 4420
00:27:36.463 subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:27:36.463 traddr:  10.0.0.1
00:27:36.463 eflags:  none
00:27:36.463 sectype: none
00:27:36.463 =====Discovery Log Entry 1======
00:27:36.463 trtype:  tcp
00:27:36.463 adrfam:  ipv4
00:27:36.463 subtype: nvme subsystem
00:27:36.463 treq:    not specified, sq flow control disable supported
00:27:36.463 portid:  1
00:27:36.463 trsvcid: 4420
00:27:36.463 subnqn:  nqn.2016-06.io.spdk:testnqn
00:27:36.463 traddr:  10.0.0.1
00:27:36.463 eflags:  none
00:27:36.463 sectype: none
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:36.463 04:18:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:36.463 EAL: No free 2048 kB hugepages reported on node 1
00:27:39.784 Initializing NVMe Controllers
00:27:39.784 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:39.784 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:39.784 Initialization complete. Launching workers.
00:27:39.784 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49495, failed: 0
00:27:39.784 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 49495, failed to submit 0
00:27:39.784 success 0, unsuccess 49495, failed 0
00:27:39.784 04:18:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:39.784 04:18:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:39.784 EAL: No free 2048 kB hugepages reported on node 1
00:27:43.079 Initializing NVMe Controllers
00:27:43.079 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:43.079 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:43.079 Initialization complete. Launching workers.
00:27:43.079 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83286, failed: 0
00:27:43.079 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20918, failed to submit 62368
00:27:43.079 success 0, unsuccess 20918, failed 0
00:27:43.079 04:18:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:43.079 04:18:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:43.079 EAL: No free 2048 kB hugepages reported on node 1
00:27:46.364 Initializing NVMe Controllers
00:27:46.364 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:46.364 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:46.364 Initialization complete. Launching workers.
00:27:46.364 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80102, failed: 0
00:27:46.364 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19998, failed to submit 60104
00:27:46.364 success 0, unsuccess 19998, failed 0
00:27:46.364 04:19:00 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:27:46.364 04:19:00 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:27:46.364 04:19:00 -- nvmf/common.sh@675 -- # echo 0
00:27:46.364 04:19:00 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:46.365 04:19:00 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:46.365 04:19:00 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:46.365 04:19:00 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:46.365 04:19:00 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*)
00:27:46.365 04:19:00 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet
00:27:46.365 04:19:00 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:48.895 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:48.895 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:49.829 0000:86:00.0 (8086 0a54): nvme -> vfio-pci
00:27:49.829 
00:27:49.829 real	0m17.442s
00:27:49.829 user	0m8.187s
00:27:49.829 sys	0m5.120s
00:27:49.829 04:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:27:49.829 04:19:04 -- common/autotest_common.sh@10 -- # set +x
00:27:49.829 ************************************
00:27:49.829 END TEST kernel_target_abort
00:27:49.829 ************************************
00:27:49.829 04:19:04 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:49.829 04:19:04 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:27:49.829 04:19:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:27:49.829 04:19:04 -- nvmf/common.sh@117 -- # sync
00:27:49.829 04:19:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:49.829 04:19:04 -- nvmf/common.sh@120 -- # set +e
00:27:49.829 04:19:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:49.829 04:19:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:49.829 rmmod nvme_tcp
00:27:49.829 rmmod nvme_fabrics
00:27:49.829 rmmod nvme_keyring
00:27:49.829 04:19:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:49.829 04:19:04 -- nvmf/common.sh@124 -- # set -e
00:27:49.829 04:19:04 -- nvmf/common.sh@125 -- # return 0
00:27:49.829 04:19:04 -- nvmf/common.sh@478 -- # '[' -n 4000510 ']'
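Unlike the spdk_target_abort test, kernel_target_abort drives the in-kernel nvmet target through configfs rather than SPDK RPCs. The mkdir/echo/ln -s xtrace lines earlier in the test, plus the clean_kernel_target teardown just traced, correspond roughly to the sketch below; note that xtrace only prints the left-hand side of each redirection, so the attribute file names here follow the standard nvmet configfs layout and are inferred, not copied from the log:

    # Sketch of configure_kernel_target as traced above (file targets assumed).
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir -p "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # Teardown mirrors it, as in clean_kernel_target above: echo 0 to the
    # namespace enable file, rm the port symlink, rmdir namespace/port/
    # subsystem directories, then modprobe -r nvmet_tcp nvmet.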
00:27:49.829 04:19:04 -- nvmf/common.sh@479 -- # killprocess 4000510
00:27:49.829 04:19:04 -- common/autotest_common.sh@936 -- # '[' -z 4000510 ']'
00:27:49.829 04:19:04 -- common/autotest_common.sh@940 -- # kill -0 4000510
00:27:49.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (4000510) - No such process
00:27:49.829 04:19:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 4000510 is not found'
00:27:49.829 Process with pid 4000510 is not found
00:27:49.829 04:19:04 -- nvmf/common.sh@481 -- # '[' iso == iso ']'
00:27:49.829 04:19:04 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:52.355 Waiting for block devices as requested
00:27:52.613 0000:86:00.0 (8086 0a54): vfio-pci -> nvme
00:27:52.613 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:52.613 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:52.872 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:52.872 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:52.872 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:53.130 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:53.130 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:53.130 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:53.389 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:27:53.389 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:27:53.389 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:27:53.389 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:27:53.647 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:27:53.647 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:27:53.647 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:27:53.906 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:27:53.906 04:19:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:27:53.906 04:19:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:27:53.906 04:19:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:53.906 04:19:08 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:53.906 04:19:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:53.906 04:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:27:53.906 04:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:55.810 04:19:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:55.810 
00:27:55.810 real	0m48.722s
00:27:55.810 user	1m10.077s
00:27:55.810 sys	0m15.688s
00:27:55.810 04:19:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:27:55.810 04:19:10 -- common/autotest_common.sh@10 -- # set +x
00:27:55.810 ************************************
00:27:55.810 END TEST nvmf_abort_qd_sizes
00:27:55.810 ************************************
00:27:56.069 04:19:10 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:27:56.069 04:19:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:27:56.069 04:19:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:27:56.069 04:19:10 -- common/autotest_common.sh@10 -- # set +x
00:27:56.069 ************************************
00:27:56.069 START TEST keyring_file
00:27:56.069 ************************************
00:27:56.069 04:19:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:27:56.069 * Looking for test storage...
00:27:56.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:27:56.069 04:19:10 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:27:56.069 04:19:10 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:56.069 04:19:10 -- nvmf/common.sh@7 -- # uname -s
00:27:56.069 04:19:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:56.069 04:19:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:56.070 04:19:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:56.070 04:19:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:56.070 04:19:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:56.070 04:19:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:56.070 04:19:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:56.070 04:19:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:56.070 04:19:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:56.070 04:19:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:56.070 04:19:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
00:27:56.070 04:19:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562
00:27:56.070 04:19:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:56.070 04:19:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:56.070 04:19:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:56.070 04:19:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:56.070 04:19:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:56.070 04:19:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:56.070 04:19:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:56.070 04:19:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:56.070 04:19:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:56.070 04:19:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:56.070 04:19:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:56.070 04:19:10 -- paths/export.sh@5 -- # export PATH
00:27:56.070 04:19:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:56.070 04:19:10 -- nvmf/common.sh@47 -- # : 0
00:27:56.070 04:19:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:56.070 04:19:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:56.070 04:19:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:56.070 04:19:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:56.070 04:19:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:56.070 04:19:10 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:56.070 04:19:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:56.070 04:19:10 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:56.070 04:19:10 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:27:56.070 04:19:10 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:27:56.070 04:19:10 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:27:56.070 04:19:10 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:27:56.070 04:19:10 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:27:56.070 04:19:10 -- keyring/file.sh@24 -- # trap cleanup EXIT
00:27:56.070 04:19:10 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:27:56.070 04:19:10 -- keyring/common.sh@15 -- # local name key digest path
00:27:56.070 04:19:10 -- keyring/common.sh@17 -- # name=key0
00:27:56.070 04:19:10 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:27:56.070 04:19:10 -- keyring/common.sh@17 -- # digest=0
00:27:56.070 04:19:10 -- keyring/common.sh@18 -- # mktemp
00:27:56.070 04:19:10 -- keyring/common.sh@18 -- # path=/tmp/tmp.nhH56dHGoc
00:27:56.070 04:19:10 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:27:56.070 04:19:10 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:27:56.070 04:19:10 -- nvmf/common.sh@691 -- # local prefix key digest
00:27:56.070 04:19:10 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1
00:27:56.070 04:19:10 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff
00:27:56.070 04:19:10 -- nvmf/common.sh@693 -- # digest=0
00:27:56.070 04:19:10 -- nvmf/common.sh@694 -- # python -
00:27:56.329 04:19:10 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nhH56dHGoc
00:27:56.329 04:19:10 -- keyring/common.sh@23 -- # echo /tmp/tmp.nhH56dHGoc
00:27:56.329 04:19:10 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nhH56dHGoc
00:27:56.329 04:19:10 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:27:56.329 04:19:10 -- keyring/common.sh@15 -- # local name key digest path
00:27:56.329 04:19:10 -- keyring/common.sh@17 -- # name=key1
00:27:56.329 04:19:10 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:27:56.329 04:19:10 -- keyring/common.sh@17 -- # digest=0
00:27:56.329 04:19:10 -- keyring/common.sh@18 -- # mktemp
00:27:56.329 04:19:10 -- keyring/common.sh@18 -- # path=/tmp/tmp.VLcCnQje9w
00:27:56.329 04:19:10 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:27:56.329 04:19:10 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:27:56.329 04:19:10 -- nvmf/common.sh@691 -- # local prefix key digest
00:27:56.329 04:19:10 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1
00:27:56.329 04:19:10 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00
00:27:56.329 04:19:10 -- nvmf/common.sh@693 -- # digest=0
00:27:56.329 04:19:10 -- nvmf/common.sh@694 -- # python -
00:27:56.329 04:19:10 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VLcCnQje9w
00:27:56.329 04:19:10 -- keyring/common.sh@23 -- # echo /tmp/tmp.VLcCnQje9w
00:27:56.329 04:19:10 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VLcCnQje9w
00:27:56.329 04:19:10 -- keyring/file.sh@30 -- # tgtpid=4010044
00:27:56.329 04:19:10 -- keyring/file.sh@32 -- # waitforlisten 4010044
00:27:56.329 04:19:10 -- common/autotest_common.sh@817 -- # '[' -z 4010044 ']'
00:27:56.329 04:19:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:56.329 04:19:10 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:56.329 04:19:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:56.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:56.329 04:19:10 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:56.329 04:19:10 -- common/autotest_common.sh@10 -- # set +x
00:27:56.329 04:19:10 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:27:56.329 [2024-04-19 04:19:10.753358] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
00:27:56.329 [2024-04-19 04:19:10.753424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010044 ]
00:27:56.329 EAL: No free 2048 kB hugepages reported on node 1
00:27:56.589 [2024-04-19 04:19:10.834365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.589 [2024-04-19 04:19:10.924573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:57.156 04:19:11 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:57.156 04:19:11 -- common/autotest_common.sh@850 -- # return 0
00:27:57.156 04:19:11 -- keyring/file.sh@33 -- # rpc_cmd
00:27:57.156 04:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:57.156 04:19:11 -- common/autotest_common.sh@10 -- # set +x
00:27:57.415 [2024-04-19 04:19:11.676475] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:57.415 null0
00:27:57.415 [2024-04-19 04:19:11.708516] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:27:57.415 [2024-04-19 04:19:11.708888] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:27:57.415 [2024-04-19 04:19:11.716534] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:27:57.415 04:19:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:57.415 04:19:11 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:27:57.415 04:19:11 -- common/autotest_common.sh@638 -- # local es=0
00:27:57.415 04:19:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:27:57.415 04:19:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:27:57.415 04:19:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:27:57.415 04:19:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:27:57.415 04:19:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:27:57.415 04:19:11 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:27:57.415 04:19:11 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:57.415 04:19:11 -- common/autotest_common.sh@10 -- # set +x
00:27:57.415 [2024-04-19 04:19:11.728565] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request:
00:27:57.415 {
00:27:57.415 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:27:57.415 "secure_channel": false,
00:27:57.415 "listen_address": {
00:27:57.415 "trtype": "tcp",
00:27:57.415 "traddr": "127.0.0.1",
00:27:57.415 "trsvcid": "4420"
00:27:57.415 },
00:27:57.415 "method": "nvmf_subsystem_add_listener",
00:27:57.415 "req_id": 1
00:27:57.415 }
00:27:57.415 Got JSON-RPC error response
00:27:57.415 response:
00:27:57.415 {
00:27:57.415 "code": -32602,
00:27:57.415 "message": "Invalid parameters"
00:27:57.415 }
00:27:57.415 04:19:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:27:57.415 04:19:11 -- common/autotest_common.sh@641 -- # es=1
00:27:57.415 04:19:11 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:27:57.415 04:19:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:27:57.415 04:19:11 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:27:57.415 04:19:11 -- keyring/file.sh@46 -- # bperfpid=4010228
00:27:57.416 04:19:11 -- keyring/file.sh@48 -- # waitforlisten 4010228 /var/tmp/bperf.sock
00:27:57.416 04:19:11 -- common/autotest_common.sh@817 -- # '[' -z 4010228 ']'
00:27:57.416 04:19:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:57.416 04:19:11 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:57.416 04:19:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:57.416 04:19:11 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:57.416 04:19:11 -- common/autotest_common.sh@10 -- # set +x
00:27:57.416 04:19:11 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:27:57.416 [2024-04-19 04:19:11.779503] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization...
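A few lines up, prep_key writes each TLS PSK to a temp file in the NVMe/TCP interchange format via format_interchange_psk and format_key, with the actual encoding done by the inline 'python -' call whose body xtrace does not show. A minimal sketch of what that step appears to compute, assuming (not shown in the log) that the payload is base64 of the key bytes followed by a 4-byte little-endian CRC32 of them, which matches the NVMeTLSkey-1:<digest>:<base64>: shape of the keys used here:

    # Hedged sketch of the 'python -' formatting step traced above; the real
    # helper lives in nvmf/common.sh and may differ in detail.
    python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' NVMeTLSkey-1 00112233445566778899aabbccddeeff 0

With digest 0 (no hash), this yields a key of the form NVMeTLSkey-1:00:...:, which the test then chmods to 0600 before registering it with keyring_file_add_key, as the bdevperf trace below goes on to exercise.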
00:27:57.416 [2024-04-19 04:19:11.779557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010228 ]
00:27:57.416 EAL: No free 2048 kB hugepages reported on node 1
00:27:57.416 [2024-04-19 04:19:11.851866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:57.416 [2024-04-19 04:19:11.940996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:57.675 04:19:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:57.675 04:19:12 -- common/autotest_common.sh@850 -- # return 0
00:27:57.675 04:19:12 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:27:57.675 04:19:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:27:57.933 04:19:12 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VLcCnQje9w
00:27:57.933 04:19:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VLcCnQje9w
00:27:58.192 04:19:12 -- keyring/file.sh@51 -- # get_key key0
00:27:58.192 04:19:12 -- keyring/file.sh@51 -- # jq -r .path
00:27:58.192 04:19:12 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:58.192 04:19:12 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:27:58.192 04:19:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:58.450 04:19:12 -- keyring/file.sh@51 -- # [[ /tmp/tmp.nhH56dHGoc == \/\t\m\p\/\t\m\p\.\n\h\H\5\6\d\H\G\o\c ]]
00:27:58.450 04:19:12 -- keyring/file.sh@52 -- # get_key key1
00:27:58.450 04:19:12 -- keyring/file.sh@52 -- # jq -r .path
00:27:58.450 04:19:12 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:58.450 04:19:12 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:27:58.450 04:19:12 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:58.708 04:19:13 -- keyring/file.sh@52 -- # [[ /tmp/tmp.VLcCnQje9w == \/\t\m\p\/\t\m\p\.\V\L\c\C\n\Q\j\e\9\w ]]
00:27:58.708 04:19:13 -- keyring/file.sh@53 -- # get_refcnt key0
00:27:58.708 04:19:13 -- keyring/common.sh@12 -- # get_key key0
00:27:58.708 04:19:13 -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:58.708 04:19:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:58.708 04:19:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:27:58.708 04:19:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:58.997 04:19:13 -- keyring/file.sh@53 -- # (( 1 == 1 ))
00:27:58.997 04:19:13 -- keyring/file.sh@54 -- # get_refcnt key1
00:27:58.997 04:19:13 -- keyring/common.sh@12 -- # get_key key1
00:27:58.997 04:19:13 -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:58.997 04:19:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:58.997 04:19:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:27:58.997 04:19:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:58.997 04:19:13 -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:27:58.997 04:19:13 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:27:58.997 04:19:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:27:59.255 [2024-04-19 04:19:13.644472] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:27:59.255 nvme0n1
00:27:59.255 04:19:13 -- keyring/file.sh@59 -- # get_refcnt key0
00:27:59.255 04:19:13 -- keyring/common.sh@12 -- # get_key key0
00:27:59.255 04:19:13 -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:59.255 04:19:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:59.255 04:19:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:27:59.255 04:19:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:59.514 04:19:13 -- keyring/file.sh@59 -- # (( 2 == 2 ))
00:27:59.514 04:19:13 -- keyring/file.sh@60 -- # get_refcnt key1
00:27:59.514 04:19:13 -- keyring/common.sh@12 -- # get_key key1
00:27:59.514 04:19:13 -- keyring/common.sh@12 -- # jq -r .refcnt
00:27:59.514 04:19:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:27:59.514 04:19:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:27:59.514 04:19:13 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:59.772 04:19:14 -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:27:59.772 04:19:14 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:00.030 Running I/O for 1 seconds...
00:28:00.965 
00:28:00.965 Latency(us)
00:28:00.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.965 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:28:00.965 nvme0n1 : 1.01 9577.76 37.41 0.00 0.00 13312.05 7536.64 25141.99
00:28:00.965 ===================================================================================================================
00:28:00.965 Total : 9577.76 37.41 0.00 0.00 13312.05 7536.64 25141.99
00:28:00.965 0
00:28:00.965 04:19:15 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:28:00.965 04:19:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:28:01.223 04:19:15 -- keyring/file.sh@65 -- # get_refcnt key0
00:28:01.223 04:19:15 -- keyring/common.sh@12 -- # get_key key0
00:28:01.223 04:19:15 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:01.223 04:19:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:01.223 04:19:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:01.223 04:19:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:01.481 04:19:15 -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:28:01.481 04:19:15 -- keyring/file.sh@66 -- # get_refcnt key1
00:28:01.481 04:19:15 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:01.481 04:19:15 -- keyring/common.sh@12 -- # get_key key1
00:28:01.481 04:19:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:01.481 04:19:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:01.481 04:19:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:28:01.739 04:19:16 -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:28:01.739 04:19:16 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:01.739 04:19:16 -- common/autotest_common.sh@638 -- # local es=0
00:28:01.739 04:19:16 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:01.739 04:19:16 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:28:01.739 04:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:01.739 04:19:16 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:28:01.739 04:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:01.739 04:19:16 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:01.739 04:19:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:28:01.997 [2024-04-19 04:19:16.358166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:28:01.998 [2024-04-19 04:19:16.358628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c7f0 (107): Transport endpoint is not connected
00:28:01.998 [2024-04-19 04:19:16.359623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8c7f0 (9): Bad file descriptor
00:28:01.998 [2024-04-19 04:19:16.360621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:01.998 [2024-04-19 04:19:16.360636] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:28:01.998 [2024-04-19 04:19:16.360645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:01.998 request:
00:28:01.998 {
00:28:01.998 "name": "nvme0",
00:28:01.998 "trtype": "tcp",
00:28:01.998 "traddr": "127.0.0.1",
00:28:01.998 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:01.998 "adrfam": "ipv4",
00:28:01.998 "trsvcid": "4420",
00:28:01.998 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:01.998 "psk": "key1",
00:28:01.998 "method": "bdev_nvme_attach_controller",
00:28:01.998 "req_id": 1
00:28:01.998 }
00:28:01.998 Got JSON-RPC error response
00:28:01.998 response:
00:28:01.998 {
00:28:01.998 "code": -32602,
00:28:01.998 "message": "Invalid parameters"
00:28:01.998 }
00:28:01.998 04:19:16 -- common/autotest_common.sh@641 -- # es=1
00:28:01.998 04:19:16 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:28:01.998 04:19:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:28:01.998 04:19:16 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:01.998 04:19:16 -- keyring/file.sh@71 -- # get_refcnt key0
00:28:01.998 04:19:16 -- keyring/common.sh@12 -- # get_key key0
00:28:01.998 04:19:16 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:01.998 04:19:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:01.998 04:19:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:01.998 04:19:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:02.256 04:19:16 -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:28:02.256 04:19:16 -- keyring/file.sh@72 -- # get_refcnt key1
00:28:02.256 04:19:16 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:02.256 04:19:16 -- keyring/common.sh@12 -- # get_key key1
00:28:02.256 04:19:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:02.256 04:19:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:02.256 04:19:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:28:02.514 04:19:16 -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:28:02.514 04:19:16 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:28:02.514 04:19:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:28:02.772 04:19:17 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:28:02.773 04:19:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:28:02.773 04:19:17 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:28:02.773 04:19:17 -- keyring/file.sh@77 -- # jq length
00:28:02.773 04:19:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:03.031 04:19:17 -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:28:03.031 04:19:17 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nhH56dHGoc
00:28:03.031 04:19:17 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.031 04:19:17 -- common/autotest_common.sh@638 -- # local es=0
00:28:03.031 04:19:17 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.031 04:19:17 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:28:03.031 04:19:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:03.031 04:19:17 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:28:03.031 04:19:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:03.031 04:19:17 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.031 04:19:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.289 [2024-04-19 04:19:17.755459] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nhH56dHGoc': 0100660
00:28:03.289 [2024-04-19 04:19:17.755488] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:28:03.289 request:
00:28:03.289 {
00:28:03.289 "name": "key0",
00:28:03.289 "path": "/tmp/tmp.nhH56dHGoc",
00:28:03.289 "method": "keyring_file_add_key",
00:28:03.289 "req_id": 1
00:28:03.289 }
00:28:03.289 Got JSON-RPC error response
00:28:03.289 response:
00:28:03.289 {
00:28:03.289 "code": -1,
00:28:03.289 "message": "Operation not permitted"
00:28:03.289 }
00:28:03.289 04:19:17 -- common/autotest_common.sh@641 -- # es=1
00:28:03.289 04:19:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:28:03.289 04:19:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:28:03.289 04:19:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:03.289 04:19:17 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nhH56dHGoc
00:28:03.289 04:19:17 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.289 04:19:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nhH56dHGoc
00:28:03.548 04:19:18 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nhH56dHGoc
00:28:03.548 04:19:18 -- keyring/file.sh@88 -- # get_refcnt key0
00:28:03.548 04:19:18 -- keyring/common.sh@12 -- # get_key key0
00:28:03.548 04:19:18 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:03.548 04:19:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:03.548 04:19:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:03.548 04:19:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:03.806 04:19:18 -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:28:03.806 04:19:18 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:03.806 04:19:18 -- common/autotest_common.sh@638 -- # local es=0
00:28:03.806 04:19:18 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:03.806 04:19:18 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:28:03.807 04:19:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:03.807 04:19:18 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:28:03.807 04:19:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:03.807 04:19:18 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:03.807 04:19:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:04.065 [2024-04-19 04:19:18.481393] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nhH56dHGoc': No such file or directory
00:28:04.065 [2024-04-19 04:19:18.481418] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:28:04.065 [2024-04-19 04:19:18.481447] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:28:04.065 [2024-04-19 04:19:18.481455] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:04.065 [2024-04-19 04:19:18.481463] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:28:04.065 request:
00:28:04.065 {
00:28:04.065 "name": "nvme0",
00:28:04.065 "trtype": "tcp",
00:28:04.065 "traddr": "127.0.0.1",
00:28:04.065 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:04.065 "adrfam": "ipv4",
00:28:04.065 "trsvcid": "4420",
00:28:04.065 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:04.065 "psk": "key0",
00:28:04.065 "method": "bdev_nvme_attach_controller",
00:28:04.065 "req_id": 1
00:28:04.065 }
00:28:04.065 Got JSON-RPC error response
00:28:04.065 response:
00:28:04.065 {
00:28:04.065 "code": -19,
00:28:04.065 "message": "No such device"
00:28:04.065 }
00:28:04.065 04:19:18 -- common/autotest_common.sh@641 -- # es=1
00:28:04.065 04:19:18 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:28:04.065 04:19:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:28:04.065 04:19:18 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:04.065 04:19:18 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:28:04.065 04:19:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:28:04.324 04:19:18 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:28:04.324 04:19:18 -- keyring/common.sh@15 -- # local name key digest path
00:28:04.324 04:19:18 -- keyring/common.sh@17 -- # name=key0
00:28:04.324 04:19:18 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:28:04.324 04:19:18 -- keyring/common.sh@17 -- # digest=0
00:28:04.324 04:19:18 -- keyring/common.sh@18 -- # mktemp
00:28:04.324 04:19:18 -- keyring/common.sh@18 -- # path=/tmp/tmp.lylXlrp9b3
00:28:04.324 04:19:18 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:28:04.324 04:19:18 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:28:04.324 04:19:18 -- nvmf/common.sh@691 -- # local prefix key digest
00:28:04.324 04:19:18 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1
00:28:04.324 04:19:18 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff
00:28:04.324 04:19:18 -- nvmf/common.sh@693 -- # digest=0
00:28:04.324 04:19:18 -- nvmf/common.sh@694 -- # python -
00:28:04.324 04:19:18 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lylXlrp9b3
00:28:04.324 04:19:18 -- keyring/common.sh@23 -- # echo /tmp/tmp.lylXlrp9b3
00:28:04.324 04:19:18 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.lylXlrp9b3
00:28:04.324 04:19:18 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lylXlrp9b3
00:28:04.324 04:19:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lylXlrp9b3
00:28:04.583 04:19:19 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:04.583 04:19:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:04.842 nvme0n1
00:28:04.842 04:19:19 -- keyring/file.sh@99 -- # get_refcnt key0
00:28:04.842 04:19:19 -- keyring/common.sh@12 -- # get_key key0
00:28:04.842 04:19:19 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:04.842 04:19:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:04.842 04:19:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:04.842 04:19:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:05.101 04:19:19 -- keyring/file.sh@99 -- # (( 2 == 2 ))
00:28:05.101 04:19:19 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0
00:28:05.101 04:19:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:28:05.359 04:19:19 -- keyring/file.sh@101 -- # get_key key0
00:28:05.359 04:19:19 -- keyring/file.sh@101 -- # jq -r .removed
00:28:05.359 04:19:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:05.359 04:19:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:05.359 04:19:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:05.618 04:19:20 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]]
00:28:05.618 04:19:20 -- keyring/file.sh@102 -- # get_refcnt key0
00:28:05.618 04:19:20 -- keyring/common.sh@12 -- # get_key key0
00:28:05.618 04:19:20 -- keyring/common.sh@12 -- # jq -r .refcnt
00:28:05.618 04:19:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:28:05.618 04:19:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:28:05.618 04:19:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:05.876 04:19:20 -- keyring/file.sh@102 -- # (( 1 == 1 ))
00:28:05.876 04:19:20 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:28:05.876 04:19:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:28:06.134 04:19:20 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys
00:28:06.134 04:19:20 -- keyring/file.sh@104 -- # jq length
00:28:06.134 04:19:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:28:06.392 04:19:20 -- keyring/file.sh@104 -- # (( 0 == 0 ))
00:28:06.392 04:19:20 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lylXlrp9b3
00:28:06.392 04:19:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lylXlrp9b3
00:28:06.651 04:19:20 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VLcCnQje9w
00:28:06.651 04:19:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VLcCnQje9w
00:28:06.909 04:19:21 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:06.909 04:19:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:28:07.168 nvme0n1
00:28:07.168 04:19:21 -- keyring/file.sh@112 -- # bperf_cmd save_config
00:28:07.168 04:19:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:28:07.427 04:19:21 -- keyring/file.sh@112 -- # config='{
00:28:07.427 "subsystems": [
00:28:07.427 {
00:28:07.427 "subsystem": "keyring",
00:28:07.427 "config": [
00:28:07.427 {
00:28:07.427 "method": "keyring_file_add_key",
00:28:07.427 "params": {
00:28:07.427 "name": "key0",
00:28:07.427 "path": "/tmp/tmp.lylXlrp9b3"
00:28:07.427 }
00:28:07.427 },
00:28:07.427 {
00:28:07.427 "method": "keyring_file_add_key",
00:28:07.427 "params": {
00:28:07.427 "name": "key1",
00:28:07.427 "path": "/tmp/tmp.VLcCnQje9w"
00:28:07.427 }
00:28:07.427 }
00:28:07.427 ]
00:28:07.427 },
00:28:07.427 {
00:28:07.427 "subsystem": "iobuf",
00:28:07.427 "config": [
00:28:07.427 {
00:28:07.427 "method": "iobuf_set_options",
00:28:07.427 "params": {
00:28:07.427 "small_pool_count": 8192,
00:28:07.427 "large_pool_count": 1024,
00:28:07.427 "small_bufsize": 8192,
00:28:07.427 "large_bufsize": 135168
00:28:07.427 }
00:28:07.427 }
00:28:07.427 ]
00:28:07.427 },
00:28:07.427 {
00:28:07.427 "subsystem": "sock",
00:28:07.427 "config": [
00:28:07.427 {
00:28:07.427 "method": "sock_impl_set_options",
00:28:07.427 "params": {
00:28:07.427 "impl_name": "posix",
00:28:07.427 "recv_buf_size": 2097152,
00:28:07.427 "send_buf_size": 2097152,
00:28:07.427 "enable_recv_pipe": true,
00:28:07.427 "enable_quickack": false,
00:28:07.427 "enable_placement_id": 0,
00:28:07.427 "enable_zerocopy_send_server": true,
00:28:07.427 "enable_zerocopy_send_client": false,
00:28:07.427 "zerocopy_threshold": 0,
00:28:07.427 "tls_version": 0,
00:28:07.427 "enable_ktls": false
00:28:07.427 }
00:28:07.427 },
00:28:07.427 {
00:28:07.427 "method": "sock_impl_set_options",
00:28:07.427 "params": {
00:28:07.427 "impl_name": "ssl",
00:28:07.427 "recv_buf_size": 4096,
00:28:07.427 "send_buf_size": 4096,
00:28:07.427 "enable_recv_pipe": true,
00:28:07.427 "enable_quickack": false,
00:28:07.427 "enable_placement_id": 0,
00:28:07.427 "enable_zerocopy_send_server": true,
00:28:07.427 "enable_zerocopy_send_client": false,
00:28:07.427 "zerocopy_threshold": 0,
00:28:07.427 
"tls_version": 0, 00:28:07.427 "enable_ktls": false 00:28:07.427 } 00:28:07.427 } 00:28:07.427 ] 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "subsystem": "vmd", 00:28:07.427 "config": [] 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "subsystem": "accel", 00:28:07.427 "config": [ 00:28:07.427 { 00:28:07.427 "method": "accel_set_options", 00:28:07.427 "params": { 00:28:07.427 "small_cache_size": 128, 00:28:07.427 "large_cache_size": 16, 00:28:07.427 "task_count": 2048, 00:28:07.427 "sequence_count": 2048, 00:28:07.427 "buf_count": 2048 00:28:07.427 } 00:28:07.427 } 00:28:07.427 ] 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "subsystem": "bdev", 00:28:07.427 "config": [ 00:28:07.427 { 00:28:07.427 "method": "bdev_set_options", 00:28:07.427 "params": { 00:28:07.427 "bdev_io_pool_size": 65535, 00:28:07.427 "bdev_io_cache_size": 256, 00:28:07.427 "bdev_auto_examine": true, 00:28:07.427 "iobuf_small_cache_size": 128, 00:28:07.427 "iobuf_large_cache_size": 16 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_raid_set_options", 00:28:07.427 "params": { 00:28:07.427 "process_window_size_kb": 1024 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_iscsi_set_options", 00:28:07.427 "params": { 00:28:07.427 "timeout_sec": 30 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_nvme_set_options", 00:28:07.427 "params": { 00:28:07.427 "action_on_timeout": "none", 00:28:07.427 "timeout_us": 0, 00:28:07.427 "timeout_admin_us": 0, 00:28:07.427 "keep_alive_timeout_ms": 10000, 00:28:07.427 "arbitration_burst": 0, 00:28:07.427 "low_priority_weight": 0, 00:28:07.427 "medium_priority_weight": 0, 00:28:07.427 "high_priority_weight": 0, 00:28:07.427 "nvme_adminq_poll_period_us": 10000, 00:28:07.427 "nvme_ioq_poll_period_us": 0, 00:28:07.427 "io_queue_requests": 512, 00:28:07.427 "delay_cmd_submit": true, 00:28:07.427 "transport_retry_count": 4, 00:28:07.427 "bdev_retry_count": 3, 00:28:07.427 "transport_ack_timeout": 0, 00:28:07.427 "ctrlr_loss_timeout_sec": 0, 00:28:07.427 "reconnect_delay_sec": 0, 00:28:07.427 "fast_io_fail_timeout_sec": 0, 00:28:07.427 "disable_auto_failback": false, 00:28:07.427 "generate_uuids": false, 00:28:07.427 "transport_tos": 0, 00:28:07.427 "nvme_error_stat": false, 00:28:07.427 "rdma_srq_size": 0, 00:28:07.427 "io_path_stat": false, 00:28:07.427 "allow_accel_sequence": false, 00:28:07.427 "rdma_max_cq_size": 0, 00:28:07.427 "rdma_cm_event_timeout_ms": 0, 00:28:07.427 "dhchap_digests": [ 00:28:07.427 "sha256", 00:28:07.427 "sha384", 00:28:07.427 "sha512" 00:28:07.427 ], 00:28:07.427 "dhchap_dhgroups": [ 00:28:07.427 "null", 00:28:07.427 "ffdhe2048", 00:28:07.427 "ffdhe3072", 00:28:07.427 "ffdhe4096", 00:28:07.427 "ffdhe6144", 00:28:07.427 "ffdhe8192" 00:28:07.427 ] 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_nvme_attach_controller", 00:28:07.427 "params": { 00:28:07.427 "name": "nvme0", 00:28:07.427 "trtype": "TCP", 00:28:07.427 "adrfam": "IPv4", 00:28:07.427 "traddr": "127.0.0.1", 00:28:07.427 "trsvcid": "4420", 00:28:07.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.427 "prchk_reftag": false, 00:28:07.427 "prchk_guard": false, 00:28:07.427 "ctrlr_loss_timeout_sec": 0, 00:28:07.427 "reconnect_delay_sec": 0, 00:28:07.427 "fast_io_fail_timeout_sec": 0, 00:28:07.427 "psk": "key0", 00:28:07.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.427 "hdgst": false, 00:28:07.427 "ddgst": false 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_nvme_set_hotplug", 
00:28:07.427 "params": { 00:28:07.427 "period_us": 100000, 00:28:07.427 "enable": false 00:28:07.427 } 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "method": "bdev_wait_for_examine" 00:28:07.427 } 00:28:07.427 ] 00:28:07.427 }, 00:28:07.427 { 00:28:07.427 "subsystem": "nbd", 00:28:07.427 "config": [] 00:28:07.427 } 00:28:07.427 ] 00:28:07.427 }' 00:28:07.427 04:19:21 -- keyring/file.sh@114 -- # killprocess 4010228 00:28:07.427 04:19:21 -- common/autotest_common.sh@936 -- # '[' -z 4010228 ']' 00:28:07.427 04:19:21 -- common/autotest_common.sh@940 -- # kill -0 4010228 00:28:07.427 04:19:21 -- common/autotest_common.sh@941 -- # uname 00:28:07.427 04:19:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:07.427 04:19:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4010228 00:28:07.427 04:19:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:07.427 04:19:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:07.427 04:19:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4010228' 00:28:07.427 killing process with pid 4010228 00:28:07.427 04:19:21 -- common/autotest_common.sh@955 -- # kill 4010228 00:28:07.427 Received shutdown signal, test time was about 1.000000 seconds 00:28:07.427 00:28:07.427 Latency(us) 00:28:07.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.427 =================================================================================================================== 00:28:07.427 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.427 04:19:21 -- common/autotest_common.sh@960 -- # wait 4010228 00:28:07.687 04:19:22 -- keyring/file.sh@117 -- # bperfpid=4012200 00:28:07.687 04:19:22 -- keyring/file.sh@119 -- # waitforlisten 4012200 /var/tmp/bperf.sock 00:28:07.687 04:19:22 -- common/autotest_common.sh@817 -- # '[' -z 4012200 ']' 00:28:07.687 04:19:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:07.687 04:19:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:07.687 04:19:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:07.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:07.687 04:19:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:07.687 04:19:22 -- common/autotest_common.sh@10 -- # set +x 00:28:07.687 04:19:22 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:07.687 04:19:22 -- keyring/file.sh@115 -- # echo '{ 00:28:07.687 "subsystems": [ 00:28:07.687 { 00:28:07.687 "subsystem": "keyring", 00:28:07.687 "config": [ 00:28:07.687 { 00:28:07.687 "method": "keyring_file_add_key", 00:28:07.687 "params": { 00:28:07.687 "name": "key0", 00:28:07.687 "path": "/tmp/tmp.lylXlrp9b3" 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "keyring_file_add_key", 00:28:07.687 "params": { 00:28:07.687 "name": "key1", 00:28:07.687 "path": "/tmp/tmp.VLcCnQje9w" 00:28:07.687 } 00:28:07.687 } 00:28:07.687 ] 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "subsystem": "iobuf", 00:28:07.687 "config": [ 00:28:07.687 { 00:28:07.687 "method": "iobuf_set_options", 00:28:07.687 "params": { 00:28:07.687 "small_pool_count": 8192, 00:28:07.687 "large_pool_count": 1024, 00:28:07.687 "small_bufsize": 8192, 00:28:07.687 "large_bufsize": 135168 00:28:07.687 } 00:28:07.687 } 00:28:07.687 ] 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "subsystem": "sock", 00:28:07.687 "config": [ 00:28:07.687 { 00:28:07.687 "method": "sock_impl_set_options", 00:28:07.687 "params": { 00:28:07.687 "impl_name": "posix", 00:28:07.687 "recv_buf_size": 2097152, 00:28:07.687 "send_buf_size": 2097152, 00:28:07.687 "enable_recv_pipe": true, 00:28:07.687 "enable_quickack": false, 00:28:07.687 "enable_placement_id": 0, 00:28:07.687 "enable_zerocopy_send_server": true, 00:28:07.687 "enable_zerocopy_send_client": false, 00:28:07.687 "zerocopy_threshold": 0, 00:28:07.687 "tls_version": 0, 00:28:07.687 "enable_ktls": false 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "sock_impl_set_options", 00:28:07.687 "params": { 00:28:07.687 "impl_name": "ssl", 00:28:07.687 "recv_buf_size": 4096, 00:28:07.687 "send_buf_size": 4096, 00:28:07.687 "enable_recv_pipe": true, 00:28:07.687 "enable_quickack": false, 00:28:07.687 "enable_placement_id": 0, 00:28:07.687 "enable_zerocopy_send_server": true, 00:28:07.687 "enable_zerocopy_send_client": false, 00:28:07.687 "zerocopy_threshold": 0, 00:28:07.687 "tls_version": 0, 00:28:07.687 "enable_ktls": false 00:28:07.687 } 00:28:07.687 } 00:28:07.687 ] 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "subsystem": "vmd", 00:28:07.687 "config": [] 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "subsystem": "accel", 00:28:07.687 "config": [ 00:28:07.687 { 00:28:07.687 "method": "accel_set_options", 00:28:07.687 "params": { 00:28:07.687 "small_cache_size": 128, 00:28:07.687 "large_cache_size": 16, 00:28:07.687 "task_count": 2048, 00:28:07.687 "sequence_count": 2048, 00:28:07.687 "buf_count": 2048 00:28:07.687 } 00:28:07.687 } 00:28:07.687 ] 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "subsystem": "bdev", 00:28:07.687 "config": [ 00:28:07.687 { 00:28:07.687 "method": "bdev_set_options", 00:28:07.687 "params": { 00:28:07.687 "bdev_io_pool_size": 65535, 00:28:07.687 "bdev_io_cache_size": 256, 00:28:07.687 "bdev_auto_examine": true, 00:28:07.687 "iobuf_small_cache_size": 128, 00:28:07.687 "iobuf_large_cache_size": 16 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "bdev_raid_set_options", 00:28:07.687 "params": { 00:28:07.687 "process_window_size_kb": 1024 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 
00:28:07.687 "method": "bdev_iscsi_set_options", 00:28:07.687 "params": { 00:28:07.687 "timeout_sec": 30 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "bdev_nvme_set_options", 00:28:07.687 "params": { 00:28:07.687 "action_on_timeout": "none", 00:28:07.687 "timeout_us": 0, 00:28:07.687 "timeout_admin_us": 0, 00:28:07.687 "keep_alive_timeout_ms": 10000, 00:28:07.687 "arbitration_burst": 0, 00:28:07.687 "low_priority_weight": 0, 00:28:07.687 "medium_priority_weight": 0, 00:28:07.687 "high_priority_weight": 0, 00:28:07.687 "nvme_adminq_poll_period_us": 10000, 00:28:07.687 "nvme_ioq_poll_period_us": 0, 00:28:07.687 "io_queue_requests": 512, 00:28:07.687 "delay_cmd_submit": true, 00:28:07.687 "transport_retry_count": 4, 00:28:07.687 "bdev_retry_count": 3, 00:28:07.687 "transport_ack_timeout": 0, 00:28:07.687 "ctrlr_loss_timeout_sec": 0, 00:28:07.687 "reconnect_delay_sec": 0, 00:28:07.687 "fast_io_fail_timeout_sec": 0, 00:28:07.687 "disable_auto_failback": false, 00:28:07.687 "generate_uuids": false, 00:28:07.687 "transport_tos": 0, 00:28:07.687 "nvme_error_stat": false, 00:28:07.687 "rdma_srq_size": 0, 00:28:07.687 "io_path_stat": false, 00:28:07.687 "allow_accel_sequence": false, 00:28:07.687 "rdma_max_cq_size": 0, 00:28:07.687 "rdma_cm_event_timeout_ms": 0, 00:28:07.687 "dhchap_digests": [ 00:28:07.687 "sha256", 00:28:07.687 "sha384", 00:28:07.687 "sha512" 00:28:07.687 ], 00:28:07.687 "dhchap_dhgroups": [ 00:28:07.687 "null", 00:28:07.687 "ffdhe2048", 00:28:07.687 "ffdhe3072", 00:28:07.687 "ffdhe4096", 00:28:07.687 "ffdhe6144", 00:28:07.687 "ffdhe8192" 00:28:07.687 ] 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "bdev_nvme_attach_controller", 00:28:07.687 "params": { 00:28:07.687 "name": "nvme0", 00:28:07.687 "trtype": "TCP", 00:28:07.687 "adrfam": "IPv4", 00:28:07.687 "traddr": "127.0.0.1", 00:28:07.687 "trsvcid": "4420", 00:28:07.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.687 "prchk_reftag": false, 00:28:07.687 "prchk_guard": false, 00:28:07.687 "ctrlr_loss_timeout_sec": 0, 00:28:07.687 "reconnect_delay_sec": 0, 00:28:07.687 "fast_io_fail_timeout_sec": 0, 00:28:07.687 "psk": "key0", 00:28:07.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.687 "hdgst": false, 00:28:07.687 "ddgst": false 00:28:07.687 } 00:28:07.687 }, 00:28:07.687 { 00:28:07.687 "method": "bdev_nvme_set_hotplug", 00:28:07.687 "params": { 00:28:07.687 "period_us": 100000, 00:28:07.687 "enable": false 00:28:07.687 } 00:28:07.688 }, 00:28:07.688 { 00:28:07.688 "method": "bdev_wait_for_examine" 00:28:07.688 } 00:28:07.688 ] 00:28:07.688 }, 00:28:07.688 { 00:28:07.688 "subsystem": "nbd", 00:28:07.688 "config": [] 00:28:07.688 } 00:28:07.688 ] 00:28:07.688 }' 00:28:07.688 [2024-04-19 04:19:22.144375] Starting SPDK v24.05-pre git sha1 77a84e60e / DPDK 23.11.0 initialization... 
00:28:07.688 [2024-04-19 04:19:22.144435] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012200 ] 00:28:07.688 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.946 [2024-04-19 04:19:22.217016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.946 [2024-04-19 04:19:22.304452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.946 [2024-04-19 04:19:22.461043] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:08.880 04:19:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:08.880 04:19:23 -- common/autotest_common.sh@850 -- # return 0 00:28:08.880 04:19:23 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:08.880 04:19:23 -- keyring/file.sh@120 -- # jq length 00:28:08.880 04:19:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.881 04:19:23 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:08.881 04:19:23 -- keyring/file.sh@121 -- # get_refcnt key0 00:28:08.881 04:19:23 -- keyring/common.sh@12 -- # get_key key0 00:28:08.881 04:19:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.881 04:19:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.881 04:19:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.881 04:19:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.138 04:19:23 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:09.138 04:19:23 -- keyring/file.sh@122 -- # get_refcnt key1 00:28:09.138 04:19:23 -- keyring/common.sh@12 -- # get_key key1 00:28:09.138 04:19:23 -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.138 04:19:23 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.138 04:19:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.138 04:19:23 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:09.396 04:19:23 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:09.396 04:19:23 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:09.396 04:19:23 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:09.396 04:19:23 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:09.655 04:19:24 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:09.655 04:19:24 -- keyring/file.sh@1 -- # cleanup 00:28:09.655 04:19:24 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.lylXlrp9b3 /tmp/tmp.VLcCnQje9w 00:28:09.655 04:19:24 -- keyring/file.sh@20 -- # killprocess 4012200 00:28:09.655 04:19:24 -- common/autotest_common.sh@936 -- # '[' -z 4012200 ']' 00:28:09.655 04:19:24 -- common/autotest_common.sh@940 -- # kill -0 4012200 00:28:09.655 04:19:24 -- common/autotest_common.sh@941 -- # uname 00:28:09.655 04:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.655 04:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4012200 00:28:09.655 04:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:09.655 04:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:09.655 04:19:24 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 4012200' 00:28:09.655 killing process with pid 4012200 00:28:09.655 04:19:24 -- common/autotest_common.sh@955 -- # kill 4012200 00:28:09.655 Received shutdown signal, test time was about 1.000000 seconds 00:28:09.655 00:28:09.655 Latency(us) 00:28:09.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.655 =================================================================================================================== 00:28:09.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:09.655 04:19:24 -- common/autotest_common.sh@960 -- # wait 4012200 00:28:09.914 04:19:24 -- keyring/file.sh@21 -- # killprocess 4010044 00:28:09.914 04:19:24 -- common/autotest_common.sh@936 -- # '[' -z 4010044 ']' 00:28:09.914 04:19:24 -- common/autotest_common.sh@940 -- # kill -0 4010044 00:28:09.914 04:19:24 -- common/autotest_common.sh@941 -- # uname 00:28:09.914 04:19:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.914 04:19:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 4010044 00:28:09.914 04:19:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:09.914 04:19:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:09.914 04:19:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 4010044' 00:28:09.914 killing process with pid 4010044 00:28:09.914 04:19:24 -- common/autotest_common.sh@955 -- # kill 4010044 00:28:09.914 [2024-04-19 04:19:24.374714] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:09.914 04:19:24 -- common/autotest_common.sh@960 -- # wait 4010044 00:28:10.480 00:28:10.480 real 0m14.273s 00:28:10.480 user 0m35.037s 00:28:10.480 sys 0m3.123s 00:28:10.480 04:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:10.480 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.480 ************************************ 00:28:10.480 END TEST keyring_file 00:28:10.480 ************************************ 00:28:10.480 04:19:24 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:28:10.480 04:19:24 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:28:10.480 04:19:24 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:28:10.480 04:19:24 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:28:10.480 04:19:24 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:28:10.480 04:19:24 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:28:10.480 04:19:24 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:28:10.480 04:19:24 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:28:10.480 04:19:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:10.480 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:28:10.480 04:19:24 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:28:10.480 04:19:24 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:28:10.480 04:19:24 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:28:10.480 04:19:24 -- common/autotest_common.sh@10 -- # set +x 00:28:15.754 INFO: APP EXITING 00:28:15.754 INFO: killing all VMs 00:28:15.754 INFO: killing vhost app 00:28:15.754 INFO: EXIT DONE 00:28:18.320 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:28:18.320 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:28:18.583 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:28:18.842 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:28:21.374 Cleaning 00:28:21.374 Removing: /var/run/dpdk/spdk0/config 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:21.374 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:21.374 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:21.632 Removing: /var/run/dpdk/spdk1/config 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:21.632 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:21.632 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:21.632 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:21.632 Removing: /var/run/dpdk/spdk2/config 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:21.632 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:21.632 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:21.632 Removing: /var/run/dpdk/spdk3/config 00:28:21.632 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:21.632 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:21.633 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:21.633 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:21.633 Removing: /var/run/dpdk/spdk4/config 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:21.633 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:21.633 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:21.633 Removing: /dev/shm/bdev_svc_trace.1 00:28:21.633 Removing: /dev/shm/nvmf_trace.0 00:28:21.633 Removing: /dev/shm/spdk_tgt_trace.pid3632437 00:28:21.633 Removing: /var/run/dpdk/spdk0 00:28:21.633 Removing: /var/run/dpdk/spdk1 00:28:21.633 Removing: /var/run/dpdk/spdk2 00:28:21.633 Removing: /var/run/dpdk/spdk3 00:28:21.633 Removing: /var/run/dpdk/spdk4 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3629723 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3630959 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3632437 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3633183 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3634266 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3634533 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3635648 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3635663 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3636045 00:28:21.633 Removing: /var/run/dpdk/spdk_pid3637996 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3639307 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3639751 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3640090 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3640443 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3640769 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3641056 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3641352 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3641730 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3642778 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3646419 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3646724 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3647017 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3647270 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3647813 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3647857 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3648500 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3648813 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3649120 00:28:21.891 Removing: 
/var/run/dpdk/spdk_pid3649309 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3649546 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3649861 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3650719 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3651012 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3651339 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3651647 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3651779 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3652020 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3652305 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3652598 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3652934 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3653302 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3653689 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3654000 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3654295 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3654580 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3654872 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3655158 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3655450 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3655751 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3656113 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3656485 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3656855 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3657146 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3657436 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3657731 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3658021 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3658312 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3658638 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3658996 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3663119 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3710656 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3715205 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3724798 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3730399 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3734633 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3735208 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3747735 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3747832 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3748761 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3749889 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3751085 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3751887 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3751892 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3752160 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3752311 00:28:21.891 Removing: /var/run/dpdk/spdk_pid3752419 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3753232 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3754253 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3755301 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3755831 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3755838 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3756110 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3757513 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3758823 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3767577 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3767920 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3772685 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3779092 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3782078 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3793049 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3802874 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3804770 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3805779 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3824186 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3828327 00:28:22.150 Removing: 
/var/run/dpdk/spdk_pid3833177 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3835004 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3836850 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3837115 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3837384 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3837406 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3837975 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3840076 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3840927 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3841488 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3844405 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3845211 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3845908 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3850363 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3860880 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3865206 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3871481 00:28:22.150 Removing: /var/run/dpdk/spdk_pid3872944 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3874698 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3879296 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3883568 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3891393 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3891395 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3896490 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3896832 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3897256 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3897790 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3897802 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3902548 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3903138 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3907655 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3910805 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3916674 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3922122 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3929447 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3929449 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3949958 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3950632 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3951290 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3951955 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3952754 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3953472 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3954042 00:28:22.151 Removing: /var/run/dpdk/spdk_pid3954804 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3959103 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3959377 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3965759 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3966055 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3968567 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3976830 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3976835 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3982173 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3984425 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3986662 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3987856 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3990100 00:28:22.409 Removing: /var/run/dpdk/spdk_pid3991498 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4001273 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4001805 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4002339 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4004891 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4005458 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4006020 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4010044 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4010228 00:28:22.409 Removing: /var/run/dpdk/spdk_pid4012200 00:28:22.409 Clean 00:28:22.409 04:19:36 -- common/autotest_common.sh@1437 -- # 
return 0 00:28:22.409 04:19:36 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:28:22.409 04:19:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:22.409 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:28:22.668 04:19:36 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:28:22.668 04:19:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:22.668 04:19:36 -- common/autotest_common.sh@10 -- # set +x 00:28:22.668 04:19:36 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:22.668 04:19:36 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:22.668 04:19:36 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:22.668 04:19:37 -- spdk/autotest.sh@389 -- # hash lcov 00:28:22.668 04:19:37 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:22.668 04:19:37 -- spdk/autotest.sh@391 -- # hostname 00:28:22.668 04:19:37 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:22.668 geninfo: WARNING: invalid characters removed from testname! 00:28:54.748 04:20:05 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:54.748 04:20:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:58.036 04:20:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:00.568 04:20:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:03.127 04:20:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:06.410 04:20:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:08.988 04:20:23 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:08.988 04:20:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.988 04:20:23 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:08.988 04:20:23 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.988 04:20:23 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.988 04:20:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.988 04:20:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.988 04:20:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.988 04:20:23 -- paths/export.sh@5 -- $ export PATH 00:29:08.988 04:20:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.988 04:20:23 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:08.988 04:20:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:29:08.988 04:20:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713493223.XXXXXX 00:29:08.988 04:20:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713493223.8p9GIY 00:29:08.988 04:20:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:29:08.988 04:20:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:29:08.988 04:20:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:08.988 04:20:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:08.989 04:20:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:08.989 04:20:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:29:08.989 04:20:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:29:08.989 04:20:23 -- common/autotest_common.sh@10 -- $ set +x 00:29:08.989 04:20:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:08.989 04:20:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:29:08.989 04:20:23 -- pm/common@17 -- $ local monitor 00:29:08.989 04:20:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.989 04:20:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4022575 00:29:08.989 04:20:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.989 04:20:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4022577 00:29:08.989 04:20:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.989 04:20:23 -- pm/common@21 -- $ date +%s 00:29:08.989 04:20:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4022579 00:29:08.989 04:20:23 -- pm/common@21 -- $ date +%s 00:29:08.989 04:20:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.989 04:20:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=4022583 00:29:08.989 04:20:23 -- pm/common@26 -- $ sleep 1 00:29:08.989 04:20:23 -- pm/common@21 -- $ date +%s 00:29:08.989 04:20:23 -- pm/common@21 -- $ date +%s 00:29:08.989 04:20:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713493223 00:29:08.989 04:20:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713493223 00:29:08.989 04:20:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713493223 00:29:08.989 04:20:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713493223 00:29:08.989 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713493223_collect-cpu-load.pm.log 00:29:08.989 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713493223_collect-vmstat.pm.log 00:29:08.989 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713493223_collect-bmc-pm.bmc.pm.log 00:29:08.989 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713493223_collect-cpu-temp.pm.log 00:29:09.954 
04:20:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:29:09.954 04:20:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:29:09.954 04:20:24 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:09.954 04:20:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:09.955 04:20:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:09.955 04:20:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:09.955 04:20:24 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:09.955 04:20:24 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:09.955 04:20:24 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:09.955 04:20:24 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:09.955 04:20:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:09.955 04:20:24 -- pm/common@30 -- $ signal_monitor_resources TERM 00:29:09.955 04:20:24 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:29:09.955 04:20:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:09.955 04:20:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:09.955 04:20:24 -- pm/common@45 -- $ pid=4022589 00:29:09.955 04:20:24 -- pm/common@52 -- $ sudo kill -TERM 4022589 00:29:09.955 04:20:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:09.955 04:20:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:09.955 04:20:24 -- pm/common@45 -- $ pid=4022592 00:29:09.955 04:20:24 -- pm/common@52 -- $ sudo kill -TERM 4022592 00:29:09.955 04:20:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:09.955 04:20:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:09.955 04:20:24 -- pm/common@45 -- $ pid=4022595 00:29:09.955 04:20:24 -- pm/common@52 -- $ sudo kill -TERM 4022595 00:29:10.213 04:20:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:10.213 04:20:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:10.213 04:20:24 -- pm/common@45 -- $ pid=4022594 00:29:10.213 04:20:24 -- pm/common@52 -- $ sudo kill -TERM 4022594 00:29:10.213 + [[ -n 3518987 ]] 00:29:10.213 + sudo kill 3518987 00:29:10.222 [Pipeline] } 00:29:10.243 [Pipeline] // stage 00:29:10.248 [Pipeline] } 00:29:10.266 [Pipeline] // timeout 00:29:10.270 [Pipeline] } 00:29:10.284 [Pipeline] // catchError 00:29:10.289 [Pipeline] } 00:29:10.303 [Pipeline] // wrap 00:29:10.309 [Pipeline] } 00:29:10.332 [Pipeline] // catchError 00:29:10.340 [Pipeline] stage 00:29:10.342 [Pipeline] { (Epilogue) 00:29:10.352 [Pipeline] catchError 00:29:10.353 [Pipeline] { 00:29:10.366 [Pipeline] echo 00:29:10.367 Cleanup processes 00:29:10.370 [Pipeline] sh 00:29:10.649 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:10.649 4022689 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:10.649 4023047 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:10.661 [Pipeline] sh 00:29:11.081 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:29:11.081 ++ grep -v 'sudo pgrep' 00:29:11.081 ++ awk '{print $1}' 00:29:11.081 + sudo kill -9 4022689 00:29:11.091 [Pipeline] sh 00:29:11.365 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:26.272 [Pipeline] sh 00:29:26.648 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:26.648 Artifacts sizes are good 00:29:26.663 [Pipeline] archiveArtifacts 00:29:26.671 Archiving artifacts 00:29:26.863 [Pipeline] sh 00:29:27.143 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:27.414 [Pipeline] cleanWs 00:29:27.422 [WS-CLEANUP] Deleting project workspace... 00:29:27.422 [WS-CLEANUP] Deferred wipeout is used... 00:29:27.428 [WS-CLEANUP] done 00:29:27.430 [Pipeline] } 00:29:27.448 [Pipeline] // catchError 00:29:27.456 [Pipeline] sh 00:29:27.734 + logger -p user.info -t JENKINS-CI 00:29:27.744 [Pipeline] } 00:29:27.758 [Pipeline] // stage 00:29:27.763 [Pipeline] } 00:29:27.778 [Pipeline] // node 00:29:27.784 [Pipeline] End of Pipeline 00:29:27.820 Finished: SUCCESS
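For reference, the coverage stage that closes the test run above boils down to a capture-merge-filter lcov pipeline. A condensed sketch, assuming lcov is installed and the tree was built with gcov instrumentation; the full flag set appears verbatim in the log:

# Shared lcov flags, abbreviated from the invocations logged above.
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

$LCOV -c -d ./spdk -t "$(hostname)" -o cov_test.info          # capture post-test counters
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info     # merge with the pre-test baseline
$LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info          # drop vendored DPDK sources
$LCOV -r cov_total.info '/usr/*'   -o cov_total.info          # drop system headers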